The IBM XIV® Storage System Gen3 is a versatile, high-end disk storage solution with an
innovative grid architecture that can provide clients excellent performance and scalability
while significantly reducing costs and complexity. XIV includes automated data placement
that needs no tuning as application workloads change.
The Emulex OneConnect Universal Converged Network Adapter (UCNA) platform enables efficient, robust, and high-performance connectivity for all business applications while protecting IT investment in existing LAN and SAN infrastructure.
The Basic Name Server (BNS) provides a mapping between logical names (symbols and aliases) and corresponding items. BNS stores these mappings in a local database and distributes global mappings to other nodes. BNS notifies applications of changes to mappings and allows applications to look up items using logical names or modify name-item mappings through library functions.
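The mapping, lookup, and change-notification behavior described above can be sketched as a toy in-process name server. All class and method names below are hypothetical illustrations, not the actual BNS library API.

```python
# Hypothetical sketch of a BNS-style name service: logical names map to
# items, and registered callbacks are notified when a mapping changes.
# These names are illustrative only, not the real BNS interface.

class NameServer:
    def __init__(self):
        self._mappings = {}   # logical name -> item
        self._watchers = []   # callbacks invoked on any mapping change

    def register(self, name, item):
        """Create or update a name-to-item mapping and notify watchers."""
        self._mappings[name] = item
        for callback in self._watchers:
            callback(name, item)

    def lookup(self, name):
        """Resolve a logical name to its item (None if unmapped)."""
        return self._mappings.get(name)

    def watch(self, callback):
        """Subscribe to mapping-change notifications."""
        self._watchers.append(callback)


ns = NameServer()
changes = []
ns.watch(lambda name, item: changes.append((name, item)))
ns.register("db-primary", "node-17")
print(ns.lookup("db-primary"))  # node-17
print(changes)                  # [('db-primary', 'node-17')]
```

In a real distributed name service, `register` would also propagate global mappings to peer nodes rather than updating only a local table.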
Network Virtualization in Windows Server 2012 (Lai Yoong Seng)
This document provides a summary of a session on networking in Windows Server 2012. The session objectives are to understand scenarios for using networking in Windows Server 2012 and see demonstrations. Key takeaways include that networking in Windows Server 2012 is designed for cloud computing and provides dynamic memory, quality of service, IP address continuity with DHCP failover, and network virtualization. The document also summarizes features like SR-IOV, SMB Direct, IP address management, branch cache, the Hyper-V extensible switch, and network virtualization partners.
Understanding the lock manager internals via the fb_lock_print utility
This session will provide a short introduction to the Firebird lock manager and its usage patterns. It will describe how the lock manager can affect the performance of highly loaded systems and outline possible bottlenecks and other problems, such as unexpected lock-ups or freezes, that may require special analysis. The structure of the lock table will also be explained.
It will also include a detailed description of the fb_lock_print utility and its usage, enabling the investigation of issues related to the lock manager. A few practical examples illustrating how to analyze the utility output will be provided. This session is mainly of interest to Classic Server users and DBAs.
The document provides an overview of new features in Exchange 2013, including architectural changes, client access improvements, integration with SharePoint and Lync, and administrative tools. Key changes include a simplified two-role architecture using Client Access Servers and Mailbox Servers, public folders now stored in mailboxes, improved compliance features, and tighter integration across Microsoft collaboration products. Administration is now done through a new web-based Exchange Administrative Center.
This document is the user manual for version 4.0.0 of MatrikonOPC Tunneller. It was last updated on June 3, 2011 and details the software's history and revisions. The manual provides instructions on installation, configuration, usage and troubleshooting of MatrikonOPC Tunneller. It also outlines the software and system requirements and includes appendices with additional reference material.
High Performance Computing Infrastructure: Past, Present, and Future (karl.barnes)
This document discusses high performance computing infrastructure from the past to present and future. It begins with an introduction to reconfigurable computing and describes the Bison Configurable Digital Signal Processor and its design flow. It discusses function cores and modules that have been developed. It also describes a remote reconfigurable computer called RARE and a parallel and configurable computer system. Finally, it discusses high performance weather forecast modeling and a proposed reconfigurable and open architecture module for unmanned systems.
IBM announced new Power Systems models on February 5, 2013, including significant updates to the Power Systems product line. The announcements included new Power 710, 720, 730, 740, 750, and 760 systems with Power7+ processors offering increased performance, memory, storage, and partitioning capabilities. New Solution Editions of AIX for Cognos and SPSS and IBM i 7.1 technology refresh were also announced.
This document discusses how data is increasingly dominating high performance computing workloads. It notes that while computing power doubles every two years, data storage and movement capabilities are not keeping pace. This is leading to a "data tsunami" as experiments and simulations generate terabytes of data per day. The document then summarizes Sun Microsystems' end-to-end infrastructure for data-centric HPC workflows, including their Lustre parallel storage system, unified storage, tape archives, high performance computing blades, and InfiniBand switches. It positions Sun as uniquely able to deliver an integrated solution from computation to long-term data retention to help users cope with the challenges posed by rapidly growing datasets.
1. Nippon Steel Corporation upgraded the aging process computer system for its continuous casting plant to reduce costs, introducing general-purpose PC servers, a general-purpose OS, and originally developed middleware. This was the first time a general-purpose RDB was applied to a continuous casting plant.
2. The upgrade aimed to reduce hardware/software costs and application development costs through the use of commercial tools and an original program code generator. An evaluation system was first implemented to test reliability.
3. A spiral development approach was used instead of waterfall, allowing early verification of specifications and improvements based on user feedback, reducing development time and costs compared to the previous system.
This talk/tutorial was one that I delivered to multiple organizations -- ranging from semiconductor houses, to start-up system vendors, to research and academic institutions, back in the 2002 time frame. As the abstract below illustrates, it captures the key essence & principles behind the router designs of two of the most popular and landmark switch/routers in our industry -- the Cisco...
Microsoft Exchange Server 2007 High Availability and Disaster Recovery Deep Dive (snarayanan)
This document discusses various Exchange Server disaster recovery and high availability solutions such as continuous replication (CCR), standby continuous replication (SCR), local continuous replication (LCR), and single copy cluster (SCC). It provides details on how each solution works, when to use each one, advantages and disadvantages of CCR versus SCC, and basics of how continuous replication functions in Exchange. It also covers topics like transport dumpster redelivery, lost log resilience, and circular logging.
This document summarizes Google's infrastructure for distributed data storage and processing.
It describes Google File System (GFS), which scales to thousands of servers through replication and partitioning files into chunks distributed across servers. GFS uses a master/slave architecture and optimizes for large reads and writes of append-only files.
It also describes MapReduce, Google's programming model for distributed data processing. MapReduce allows parallelization through a map function that processes key-value pairs, followed by an optional combine and a reduce function to aggregate results. This provides fault tolerance and allows computation to move to data.
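The map/group/reduce flow described above can be sketched as a single-process word count. This illustrates only the programming model, not Google's distributed implementation.

```python
# Minimal word count in the MapReduce style: a map function emits
# key-value pairs, pairs are grouped by key (the "shuffle"), and a
# reduce function aggregates each group. Single-process sketch only.
from collections import defaultdict

def map_fn(document):
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):
    return (word, sum(counts))

def map_reduce(documents):
    groups = defaultdict(list)
    for doc in documents:                  # map phase
        for key, value in map_fn(doc):
            groups[key].append(value)      # shuffle: group values by key
    return dict(reduce_fn(k, v) for k, v in groups.items())  # reduce phase

print(map_reduce(["a b a", "b c"]))  # {'a': 2, 'b': 2, 'c': 1}
```

In the real system the map and reduce workers run on many machines, and the framework supplies fault tolerance by re-executing failed tasks.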
Converged Data Center: FCoE, iSCSI and the Future of Storage Networking (EMC)
From ( EMCWorld 2011 ) : This session explores the opportunities and challenges of using a single network to support both storage and networking. The Fibre Channel over Ethernet (FCoE) and iSCSI (SCSI over TCP/IP) protocols offer two approaches for supporting storage over Ethernet. Standards, technologies and deployment scenarios for both protocols are covered, along with the future of storage networking technology.
Hadoop 2.0 offers significant HDFS improvements: a new append pipeline, federation, wire compatibility, NameNode HA, performance improvements, and more. We describe these features and their benefits. We also discuss development that is underway for the next HDFS release, including much-needed data management features such as Snapshots and Disaster Recovery. We are adding support for different classes of storage devices, such as SSDs, and open interfaces such as NFS; together these extend HDFS into a more general storage system. As with every release, we will continue to improve the performance, diagnosability, and manageability of HDFS.
Windows Server 2012 includes several new and improved networking features for Hyper-V. These features help improve performance and scalability by offloading more processing to the network interface card. New features include improved Receive Side Scaling, Receive Segment Coalescing, Dynamic Virtual Machine Queuing, Single Root I/O Virtualization, and NIC teaming. These features address challenges around availability, reliability, security and reducing complexity for virtualized workloads.
The document summarizes the key differences and improvements between AMD's "Bobcat" and "Jaguar" CPU cores. "Jaguar" features over 10% higher core frequencies, over 15% higher instructions per clock, double the core count from two to four cores, and a larger shared 2MB L2 cache compared to "Bobcat's" 512KB per core cache. "Jaguar" also includes additional instruction sets, a larger 40-bit physical address capability, and a 128-bit floating point unit providing better performance and capabilities than "Bobcat".
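Treating the quoted figures as lower bounds, a back-of-envelope calculation (an illustration, not an AMD benchmark) suggests roughly a 1.27x single-thread and 2.5x aggregate throughput uplift:

```python
# Back-of-envelope estimate of "Jaguar" throughput vs "Bobcat", using
# the figures quoted above as lower bounds: >10% frequency, >15% IPC,
# and double the core count. Illustrative arithmetic only.
freq_gain = 1.10   # >10% higher core frequency
ipc_gain  = 1.15   # >15% higher instructions per clock
core_gain = 2.0    # four cores vs two

per_core = freq_gain * ipc_gain    # ~1.27x single-thread uplift
aggregate = per_core * core_gain   # ~2.5x multi-thread upper bound

print(round(per_core, 3), round(aggregate, 3))  # 1.265 2.53
```

The aggregate figure is an upper bound: it assumes a perfectly parallel workload and ignores shared-L2 and memory-bandwidth contention.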
The document discusses Java Platform as a Service (PaaS) offerings. It begins by explaining the importance of PaaS and how it provides benefits like increased agility and reduced costs. It then reviews existing Java PaaS options like Google App Engine, Amazon Elastic Beanstalk, and CloudBees. It notes limitations of Google App Engine related to APIs and constraints. It describes Amazon Elastic Beanstalk and CloudBees as offering more flexibility but still relying on underlying infrastructure as a service platforms. The document advocates that the Java virtual machine is well suited for cloud computing due to its ability to manage resources.
This document summarizes new features and enhancements in IBM XIV Storage System R3.4/11.4, which was generally available in November 2013. Key highlights include data-at-rest encryption with KMIP support, expanded scale and flash caching, integration with OpenStack and IBM SmartCloud Storage Access for private clouds, as well as performance optimizations for Microsoft Exchange 2013. Upcoming multi-tenancy support is also noted to help cloud providers accommodate customer security requirements.
The document discusses IBM C9020-971 certification and resources to help pass the exam, including dumps, study guides, practice exams, and demo questions and answers from pass4sures.co. It promotes pass4sures.co as offering the latest exam preparation materials to ensure users pass the C9020-971 exam on their first attempt, with a money back guarantee if they fail. The document provides links to purchase study materials for the C9020-971 certification from pass4sures.co.
The document discusses IBM's XIV storage array. It provides details on:
- XIV's simplicity and ease of use with no tuning required
- XIV's scalability to multi-petabyte capacities through its grid architecture
- New features of XIV including real-time compression, Microsoft Azure integration, and quality of service classes
This document discusses IBM's XIV storage system and its advantages over traditional storage. Some key points:
- XIV provides radical simplicity through its self-optimizing architecture that eliminates hotspots and manual tuning. It also offers enterprise features like snapshots and mirroring at no extra cost.
- Its scale-out architecture allows it to scale capacity and performance non-disruptively. It can also migrate volumes between systems transparently to hosts.
- Benchmark tests show it achieves exceptional performance and price/performance results. Customers report it is easier to deploy and manage than other solutions and provides more predictable performance.
XIV is a grid-based storage system that was acquired by IBM in 2007. It uses a unique architecture that distributes data evenly across all drives to avoid hotspots and ensure balanced performance even as the system scales. Key features like snapshots, replication, and thin provisioning are included at no additional charge. The document provides details on XIV's history, architecture, scalability, and comparison to traditional storage systems.
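XIV's actual placement algorithm is proprietary; the sketch below only illustrates the general idea of pseudo-random, hash-based distribution, which keeps per-drive load nearly uniform as partitions are added. The drive count and hashing scheme are assumptions for illustration.

```python
# Illustrative sketch (not XIV's real algorithm): hashing each logical
# partition to a pseudo-random pair of drives spreads load evenly, so
# no single drive becomes a hotspot as the system scales.
import hashlib
from collections import Counter

def place(partition_id, num_drives):
    """Pick a primary and a secondary drive for one partition's two copies."""
    digest = hashlib.sha256(str(partition_id).encode()).digest()
    primary = int.from_bytes(digest[:8], "big") % num_drives
    offset = 1 + int.from_bytes(digest[8:16], "big") % (num_drives - 1)
    secondary = (primary + offset) % num_drives  # always a different drive
    return primary, secondary

drives = Counter()
for pid in range(100_000):
    p, s = place(pid, num_drives=180)
    drives[p] += 1
    drives[s] += 1

print(min(drives.values()), max(drives.values()))  # both near the mean of ~1111
```

Because placement is a pure function of the partition ID, every module can compute the location of any partition without consulting a central directory.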
This document discusses IBM's Spectrum Storage portfolio, which provides software solutions to accelerate, optimize, virtualize and streamline storage environments. It aims to simplify storage management across traditional and new applications, deliver scalability and high performance for analytics and big data, unify data silos, optimize data economics through intelligent tiering, and support hybrid cloud and industry standards. Specific IBM Spectrum products are highlighted, including Spectrum Protect for backup and recovery, Spectrum Control for storage management, Spectrum Archive for active archiving, Spectrum Scale for high performance file and object storage, and Spectrum Virtualize for software defined storage virtualization.
This document summarizes a test of an IBM XIV Gen3 Storage System Model 2810/114 to support 120,000 mailboxes for an Exchange 2010 environment. The solution uses two XIV Gen3 frames with a total usable capacity of 243TB per frame. Each frame is connected to 12 IBM System x3650 servers running Exchange 2010 across two database availability groups (DAGs) of 24 servers each. Databases and their copies are distributed equally across the two storage frames to provide high availability and disaster recovery. Stress, performance, backup, and recovery tests were run to validate the reliability and performance of the solution.
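A rough capacity check of the figures above (assuming, purely for illustration, that each database keeps one copy on the opposite frame):

```python
# Rough capacity arithmetic for the configuration described: two frames
# at 243TB usable each, serving 120,000 mailboxes. The mirrored-copy
# assumption is an illustration, not a statement of the tested layout.
frames = 2
usable_tb_per_frame = 243
mailboxes = 120_000

total_tb = frames * usable_tb_per_frame        # 486 TB usable in total
gb_per_mailbox = total_tb * 1000 / mailboxes   # ~4 GB raw per mailbox
with_one_copy = gb_per_mailbox / 2             # ~2 GB if each DB is mirrored

print(gb_per_mailbox, with_one_copy)  # ~4.05 GB and ~2.0 GB
```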
IBM Enterprise 2014: Power Systems Technical University - Preliminary Agenda (Casey Lucas)
The document is a preliminary agenda for the Enterprise2014 conference taking place October 6-10 in Las Vegas. It outlines the schedule of sessions each day, including topics, session types (e.g. lecture, hands-on lab), and presenters. Attendees can access additional information on the conference portal after September 15, including customizing their schedule and downloading presentations.
Virtualization 101: Everything You Need To Know To Get Started With VMware (Datapath Consulting)
This document provides an overview of virtualization and VMware's virtualization platform vSphere. It begins with defining virtualization as using software to run multiple virtual machines on a single physical machine, sharing resources to improve utilization. It then discusses VMware's history and role as the market leader in virtualization. The document outlines the key benefits of virtualization such as reducing costs, increasing flexibility and enabling business agility. It provides an overview of vSphere's capabilities to deliver high availability, live migration, storage efficiency and faster disaster recovery. Overall, the document promotes virtualization and vSphere as a way to simplify IT operations and lower costs while increasing business agility.
This document discusses the benefits of computational storage drives (CSDs) with built-in transparent data compression. CSDs can improve storage efficiency and performance by compressing data inline without software involvement. Three case studies show how CSDs enable new storage optimizations by allowing applications to purposely waste logical storage space, which is recovered through compression. Sparse write-ahead logging and a tableless hash-based key-value store are examples where wasted space improves performance or reduces overhead at no storage cost. CSDs thus open doors for novel storage optimizations by decoupling logical and physical storage utilization.
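The core idea, that logical space "wasted" on sparse or zero-filled regions costs little physical capacity once the drive compresses inline, can be illustrated with ordinary zlib standing in for the CSD's compression engine:

```python
# Why wasting logical space is cheap under transparent compression: a
# mostly-zero (sparse) 4KB block collapses to a small physical
# footprint. zlib here stands in for the CSD's inline hardware engine.
import os
import zlib

block = os.urandom(512) + bytes(4096 - 512)  # 512B of data, zero padding
compressed = zlib.compress(block)

print(len(block), len(compressed))  # 4096 vs a little over 512
```

Only the 512 incompressible bytes consume real capacity; the 3.5KB of padding compresses away, which is why layouts like sparse write-ahead logs can trade logical space for performance at almost no physical cost.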
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers (Ceph Community)
The document discusses a presentation about Ceph on all-flash storage using InfiniFlash systems to break performance barriers. It describes how Ceph has been optimized for flash storage and how InfiniFlash systems provide industry-leading performance of over 1 million IOPS and 6-9GB/s of throughput using SanDisk flash technology. The presentation also covers how InfiniFlash can provide scalable performance and capacity for large-scale enterprise workloads.
Ceph Day Berlin: Ceph on All Flash Storage - Breaking Performance BarriersCeph Community
The document discusses performance testing done comparing Ceph running on all-flash storage (InfiniFlash OS) versus stock Ceph storage. Testing showed the InfiniFlash OS implementation of Ceph achieved up to 12x better performance for 8K random read workloads and up to 3x better performance for 64K and 256K random read workloads compared to stock Ceph. Performance also scaled linearly as additional InfiniFlash storage nodes were added. The InfiniFlash OS provides an enterprise-hardened version of Ceph optimized for all-flash storage performance.
The document discusses accelerating Ceph storage performance using SPDK. SPDK introduces optimizations like asynchronous APIs, userspace I/O stacks, and polling mode drivers to reduce software overhead and better utilize fast storage devices. This allows Ceph to better support high performance networks and storage like NVMe SSDs. The document provides an example where SPDK helped XSKY's BlueStore object store achieve significant performance gains over the standard Ceph implementation.
Evaluating the networking performance of linux based home router platforms fo...Alpen-Adria-Universität
This document evaluates the networking performance of Linux-based home router platforms for supporting multimedia services. It discusses several examples of multimedia services that can run on home routers, such as adapting scalable video, serving multimedia content, and sharing social media content. It then evaluates the processing, memory, and networking performance of three representative home router hardware platforms running the openWrt Linux distribution. The evaluation includes benchmarks for CPU and memory performance as well as tests of TCP and UDP throughput. The results show that while home router hardware is much less powerful than desktop CPUs, the platforms are capable of supporting basic multimedia services within the home network.
The document summarizes the architecture and configuration of a large-scale data warehouse implemented at Yahoo using Oracle RAC on IBM x3850 servers. Key aspects included 16-node Oracle RAC with InfiniBand networking, EMC storage, large memory and CPU configurations to support multi-terabyte datasets and high query concurrency. Comprehensive testing was performed to validate performance and scalability requirements.
An introduction and evaluations of a wide area distributed storage systemHiroki Kashiwazaki
A presentation on Storage Developer Conference (SDC) 2014 in Santa Clara, California. General overview of distcloud until now and the future.
米カリフォルニア州サンタクララで開催された Storage Developer Conference 2014 での発表資料です。distcloud のこれまでとこれからの総括。
The Supermicro X12 product line, powered by 3rd Gen Intel® Xeon® Scalable processors, contains many innovations that gives organizations more performance for a variety of workloads.
Join this webinar to learn more about the outstanding performance you can get by using Supermicro X12 servers and storage systems using the latest technologies from Intel®.
Watch the webinar: https://www.brighttalk.com/webcast/17278/514618
The document discusses using flash storage to accelerate application performance. It describes how flash provides faster data transfer rates, IOPS and lower latency compared to HDDs. It outlines different ways flash can be used, including as a host-side PCIe device, array-based caching, or within an all-flash array optimized for flash. The Whiptail storage system is highlighted as providing high throughput, IOPS and endurance while reducing power, space and cooling needs compared to HDD solutions. It can support multiple workloads on a single system.
Microsoft PowerPoint - WirelessCluster_PresVideoguy
This document analyzes delays in unicast video streaming over IEEE 802.11 WLAN networks. It describes conducting an experiment using a testbed with a Darwin Streaming Server and WLAN probe to capture packets. The analysis found that video bitrate variations, packetization scheme, bandwidth load, and frame-based nature of video all impacted mean delay. Bursts of packets from video frames caused per-packet delay to increase in a sawtooth pattern. Increasing uplink load was also found to affect delay variations.
This report was prepared based on my own experiance and for sleflearning , this is report is not to give you or guide to take decision . This benchmark report has been created to high you a rough idea about the Alibaba basic components. Taking this report benchmark for a comparison of other cloud competitors is at own user Risk.
Disaggregation a Primer: Optimizing design for Edge Cloud & Bare Metal applic...Netronome
From the Infra//Structure Conference May 2019 by Ron Renwick of Netronome
Disaggregation a Primer:
Optimizing design for Edge Cloud & Bare Metal applications
Hyperscalers and Edge Cloud providers have recognized economic value of disaggregated infrastructure. Netronome Agilio SmartNICs enable disaggregated architectures to perform with up to 30x lower tail latency while encrypting every session using KTLS security.
Anton Moldovan "Building an efficient replication system for thousands of ter...Fwdays
For one of our projects, we needed to improve the current content delivery system for terminals. In this talk, I will share our experience in building an efficient data replication system for thousands of terminals. We will touch on architecture decisions and tradeoffs, technologies that we used, and a bit of load testing.
Spoiler: We didn't use Kafka.
I/O Consolidation in the Data Center -ExcerptJamie Shoup
This chapter discusses I/O consolidation in data centers. Currently, most data centers use separate Ethernet, Fibre Channel, and Infiniband networks. I/O consolidation aims to use a single physical network for all traffic, reducing costs and complexity. It must meet the requirements of different traffic types, including Ethernet, Fibre Channel storage traffic, and low-latency inter-processor communication. Past attempts at I/O consolidation faced challenges from incompatibility between network types and reliance on gateways. Emerging technologies like 10 Gigabit Ethernet and PCI Express may help overcome these challenges by providing sufficient bandwidth to consolidate traffic on a single network.
Presented approaches for generation of multiple clock gating domain parameterized PVT independent power abstracts for large IP blocks. We accomplish the gating domain parameterization through separation of the attribution of switching due to each single domain through a marking and tracing process, thereby precluding the need for separate domain by domain simulation to achieve the parameterization.
Experimental results comparing proposed approach on IP blocks of varying sizes from a real industry strength microprocessor design clearly highlight accuracy impact while keeping run time and model size increase in an acceptable range. In terms of extensions, we are exploring approaches where we could preserve each of the domains independently, for which we are looking into formulations based on constructing clock gating domain conflict hyper graphs and coloring them to determine domain interactions.
On the Use of Burst Buffers for Accelerating Data-Intensive Scientific WorkflowsRafael Ferreira da Silva
The document discusses using burst buffers to accelerate I/O performance for data-intensive scientific workflows. It finds that burst buffers improved write performance by 9x and read performance by 15x for a cybersecurity workflow. However, performance decreased slightly with more than 64 nodes due to potential I/O bottlenecks. While burst buffers helped, other approaches like in-situ processing may also be needed to meet all application requirements. Future work includes investigating combined in-situ and in-transit analysis and developing a production workflow management system with burst buffer support.
Google and Intel speak on NFV and SFC service delivery
The slides are as presented at the meet up "Out of Box Network Developers" sponsored by Intel Networking Developer Zone
Here is the Agenda of the slides:
How DPDK, RDT and gRPC fit into SDI/SDN, NFV and OpenStack
Key Platform Requirements for SDI
SDI Platform Ingredients: DPDK, IntelⓇRDT
gRPC Service Framework
IntelⓇ RDT and gRPC service framework
Similar to SPC BENCHMARK 2/ENERGY™ EXECUTIVE SUMMARY (20)
This IBM Redpaper provides a brief overview of OpenStack and a basic familiarity of its usage with the IBM XIV Storage System Gen3. The illustration scenario that is presented uses the OpenStack Folsom release implementation IaaS with Ubuntu Linux servers and the IBM Storage Driver for OpenStack. For more information on IBM Storage Systems, visit http://ibm.co/LIg7gk.
Visit http://bit.ly/KWh5Dx to 'Follow' the official Twitter handle of IBM India Smarter Computing.
Learn how all flash needs end to end Storage efficiency. For more information on IBM FlashSystem, visit http://ibm.co/10KodHl.
Visit http://bit.ly/KWh5Dx to 'Follow' the official Twitter handle of IBM India Smarter Computing.
Learn about vSphere Storage API for Array Integration on the IBM Storwize family. IBM Storwize V7000 Unified combines the block storage capabilities of Storwize V7000 with file storage capabilities into a single system for greater ease of management and efficiency. For more information on IBM Storage Systems, visit http://ibm.co/LIg7gk.
Visit http://bit.ly/KWh5Dx to 'Follow' the official Twitter handle of IBM India Smarter Computing.
Learn about IBM FlashSystem 840 and its complete product specification in this Redbook. FlashSystem 840 provides scalable performance for the most demanding enterprise class applications. IBM FlashSystem 840 accelerates response times with IBM MicroLatency to enable faster decision making. For more information on IBM FlashSystem, visit http://ibm.co/10KodHl.
Visit http://on.fb.me/LT4gdu to 'Like' the official Facebook page of IBM India Smarter Computing.
Learn about the IBM System x3250 M5,.The x3250 M5 offers the following energy-efficiency features to save energy, reduce operational costs, increase energy availability, and contribute to a green environment, energy-efficient planar components help lower operational costs. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210746104/IBM-System-x3250-M5
This Redbook talks about the product specification of IBM NeXtScale nx360 M4. The NeXtScale nx360 M4 server provides a dense, flexible solution with a low total cost of ownership (TCO). The half-wide, dual-socket NeXtScale nx360 M4 server is designed for data centers that require high performance but are constrained by floor space. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210745680/IBM-NeXtScale-nx360-M4
The IBM System x3650 M4 HD is a (1) 2-socket 2U rack-optimized server that supports up to 32 internal drives and features an innovative design for optimal performance, uptime, and dense storage. It offers (2) excellent reliability, availability, and serviceability for improved business environments. The server is (3) designed for easy deployment, integration, service, and management.
Here are the product specification for IBM System x3300 M4. This product can be managed remotely.The x3300 M4 server contains IBM IMM2, which provides advanced service-processor control, monitoring, and an alerting function. The IMM2 lights LEDs to help you diagnose the problem, records the error in the event log, and alerts you to the problem. For more information on System x, visit http://ibm.co/Q7m3iQ.
Visit http://on.fb.me/LT4gdu to 'Like' the official Facebook page of IBM India Smarter Computing.
Learn about IBM System x iDataPlex dx360 M4. IBM System x iDataPlex is an innovative data center solution that maximizes performance and optimizes energy and space efficiency. The iDataPlex solution provides customers with outstanding energy and cooling efficiency, multi-rack level manageability, complete flexibility in configuration, and minimal deployment effort. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210744055/IBM-System-x-iDataPlex-dx360-M4
The IBM System x3500 M4 server provides powerful and scalable performance for business applications in an energy efficient tower or rack design. It features the latest Intel Xeon E5-2600 v2 or E5-2600 processors with up to 24 cores, 768GB RAM, 32 hard drives, and 8 PCIe slots. Comprehensive systems management tools and redundant components help ensure high availability, while its small footprint and 80 Plus Platinum power supplies reduce data center costs.
Learn about system specification for IBM System x3550 M4. The x3550 M4 offers numerous features to boost performance, improve scalability, and reduce costs. Improves productivity by offering superior system performance with up to 12-core processors, up to 30 MB of L3 cache, and up to two 8 GT/s QPI interconnect links. For more information on System x, visit http://ibm.co/Q7m3iQ.
Learn about IBM System x3650 M4. The x3650 M4 is an outstanding 2U two-socket business-critical server, offering improved performance and pay-as-you grow flexibility along with new features that improve server management capability. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210741926/IBM-System-x3650-M4
Learn about the product specification of IBM System x3500 M3. System x3500 M3 has an energy-efficient design which works in conjunction with the IMM to govern fan rotation based on the readings that it delivers. This saves money under normal conditions because the fans do not have to spin at high speed. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210741626/IBM-System-x3500-M3
Learn about IBM System x3400 M3. The x3400 M3 offers numerous features to boost performance and reduce costs, x3400 M3 has the ability to grow with your application requirements with these features. Powerful systems management features simplify local and remote management of the x3400 M3. For more information on System x, visit http://ibm.co/Q7m3iQ.
Visit http://on.fb.me/LT4gdu to 'Like' the official Facebook page of IBM India Smarter Computing.
Learn about IBM System 3250 M3 which is a single-socket server that offers new levels of performance and flexibility
to help you respond quickly to changing business demands. Cost-effective and compact, it is well suited to small to mid-sized businesses, as well as large enterprises. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210740347/IBM-System-x3250-M3
Learn about IBM System x3200 M3 and its specifications. The System x3200 M3 features easy installation and management with a rich set of options for hard disk drives and memory. The efficient design helps to save energy and provide a better work environment with less heat and noise. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210739508/IBM-System-x3200-M3
Learn about the configuration of IBM PowerVC. IBM PowerVC is built on OpenStack that controls large pools of server, storage, and networking resources throughout a data center. IBM Power Virtualization Center provides security services that support a secure environment. Installation requires just 20 minutes to get a virtual machine up and running. For more information on Power Systems, visit http://ibm.co/Lx6hfc.
Visit http://on.fb.me/LT4gdu to 'Like' the official Facebook page of IBM India Smarter Computing.
Learn about Ibm POWER7 Virtualization Performance. PowerVM Lx86 is a cross-platform virtualization solution that enables the running of a wide range of x86 Linux applications on Power Systems platforms within a Linux on Power partition without modifications or recompilation of the workloads. For more information on Power Systems, visit http://ibm.co/Lx6hfc.
http://www.scribd.com/doc/210734237/A-Comparison-of-PowerVM-and-Vmware-Virtualization-Performance
This reference architecture document describes deploying the VMware vCloud Enterprise Suite on the IBM PureFlex System hardware platform. Key points:
- The vCloud Suite software provides components for managing and delivering cloud services, while the IBM PureFlex System provides an integrated hardware platform in a single chassis.
- The reference architecture focuses on installing the vCloud Suite management components as virtual machines on an ESXi host to manage consumer resources.
- The IBM PureFlex System provides servers, networking, and storage in a single chassis that can then be easily scaled out. This standardized deployment accelerates provisioning of cloud infrastructure.
- Deployment considerations cover systems management using IBM Flex System Manager, server, networking, storage configurations
Learn how x6: The sixth generation of EXA Technology is fast, agile and Resilient for Emerging Workloads from Alex Yost. Vice President, IBM PureSystems and System x
IBM Systems and Technology Group. x6 drives cloud and big data for enterprises by achieving insight faster thereby outperforming competitors. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210715795/X6-The-sixth-generation-of-EXA-Technology
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
SPC BENCHMARK 2/ENERGY™
EXECUTIVE SUMMARY

IBM CORPORATION
IBM XIV® STORAGE SYSTEM GEN3
SPC-2/E™ V1.4

Submitted for Review: October 19, 2011
Submission Identifier: BE00001
EXECUTIVE SUMMARY

Test Sponsor and Contact Information
Test Sponsor:        IBM Corporation – http://www.ibm.com
Primary Contact:     Bruce McNutt – bmcnutt@us.ibm.com
                     650 Harry Road
                     San Jose, CA 95120
                     Phone: (408) 927-2717
                     FAX: (408) 927-2050

Test Sponsor         IBM Corporation – http://www.ibm.com
Alternate Contact:   Yijie Zhang – yijie@us.ibm.com
                     9000 Rita Road
                     IBM Mail Drop 9042-2
                     Tucson, AZ 85744
                     Phone: (520) 799-5843
                     FAX: (520) 799-2009

Auditor:             Storage Performance Council – http://www.storageperformance.org
                     Walter E. Baker – AuditService@StoragePerformance.org
                     643 Bair Island Road, Suite 103
                     Redwood City, CA 94063
                     Phone: (650) 556-9384
                     FAX: (650) 556-9385
Revision Information and Key Dates

SPC-2 Specification revision number:                        V1.4
SPC-2 Workload Generator revision number:                   V1.0
Date Results were first used publicly:                      October 19, 2011
Date FDR was submitted to the SPC:                          October 19, 2011
Date the TSC will be available for shipment to customers:   October 31, 2011
Date the TSC completed audit certification:                 October 19, 2011
Tested Storage Product (TSP) Description

The IBM XIV® Storage System Gen3 is a versatile, high-end disk storage solution with an innovative grid architecture that can provide clients excellent performance and scalability while significantly reducing costs and complexity. XIV includes automated data placement that needs no tuning as application workloads change.
SPC-2 Reported Data

SPC-2 Reported Data consists of three groups of information:
- The SPC-2 Primary Metrics, which characterize the overall benchmark result: SPC-2 MBPS™ and Application Storage Unit (ASU) Capacity.
- Supplemental data to the SPC-2 Primary Metrics: Total Price and Data Protection Level.
- Reported Data for each SPC-2 Test: the Large File Processing (LFP), Large Database Query (LDQ), and Video on Demand Delivery (VOD) Tests.
SPC-2 Reported Data
IBM XIV® Storage System

SPC-2 MBPS™   SPC-2 Price-Performance   ASU Capacity (GB)   Total Price     Data Protection Level
7,467.99      $152.34                   154,618.823         $1,137,641.30   Protected (Mirroring)

The above SPC-2 MBPS™ value represents the aggregate data rate of all three SPC-2 workloads:
Large File Processing (LFP), Large Database Query (LDQ), and Video On Demand (VOD)
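The headline SPC-2 MBPS™ value is the average of the three workload composite rates, and the price-performance value is simply the total price divided by that rate. A minimal Python sketch, using the composite rates from the workload tables that follow:

```python
# Workload composite data rates (MB/s) from the tables below.
LFP_COMPOSITE = 8_259.94    # Large File Processing
LDQ_COMPOSITE = 9_740.03    # Large Database Query
VOD_RATE      = 4_404.00    # Video on Demand

TOTAL_PRICE = 1_137_641.30  # Priced Storage Configuration, in dollars

# SPC-2 MBPS is the average of the three workload data rates.
spc2_mbps = (LFP_COMPOSITE + LDQ_COMPOSITE + VOD_RATE) / 3
# Price-performance is the total price per reported MB/s.
price_performance = TOTAL_PRICE / spc2_mbps

print(f"SPC-2 MBPS: {spc2_mbps:,.2f}  Price-Performance: ${price_performance:.2f}/MBPS")
```

Both values reproduce the reported figures (7,467.99 MBPS and $152.34) to two decimal places.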
SPC-2 Large File Processing (LFP) Reported Data

                        Data Rate     Number of   Data Rate
                        (MB/second)   Streams     per Stream   Price-Performance
LFP Composite           8,259.94                               $137.73
Write Only:
  1024 KiB Transfer     6,724.35      576         11.67
  256 KiB Transfer      6,766.87      576         11.75
Read-Write:
  1024 KiB Transfer     8,246.33      576         14.32
  256 KiB Transfer      8,197.50      576         14.23
Read Only:
  1024 KiB Transfer     9,416.48      576         16.35
  256 KiB Transfer      10,208.08     576         17.72

The above SPC-2 Data Rate value for LFP Composite represents the aggregate performance of all three LFP Test Phases: (Write Only, Read-Write, and Read Only).
SPC-2 Large Database Query (LDQ) Reported Data

                          Data Rate     Number of   Data Rate
                          (MB/second)   Streams     per Stream   Price-Performance
LDQ Composite             9,740.03                               $116.80
1024 KiB Transfer Size:
  4 I/Os Outstanding      9,496.90      576         16.49
  1 I/O Outstanding       9,422.03      576         16.36
64 KiB Transfer Size:
  4 I/Os Outstanding      10,069.26     576         17.48
  1 I/O Outstanding       9,971.93      576         17.31

The above SPC-2 Data Rate value for LDQ Composite represents the aggregate performance of the two LDQ Test Phases: (1024 KiB and 64 KiB Transfer Sizes).
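The LFP and LDQ composite values are averages over their test phases, and each per-stream rate is the phase data rate divided by the stream count. A sketch of how the figures above combine (Python; rates in MB/s, taken from the two tables):

```python
# LFP phase data rates (MB/s): two transfer sizes per test phase.
LFP_PHASES = {
    "write_only": [6_724.35, 6_766.87],
    "read_write": [8_246.33, 8_197.50],
    "read_only":  [9_416.48, 10_208.08],
}
# LDQ phase data rates (MB/s): two outstanding-I/O counts per transfer size.
LDQ_PHASES = {
    "1024KiB": [9_496.90, 9_422.03],
    "64KiB":   [10_069.26, 9_971.93],
}

def composite(phases: dict) -> float:
    """Average the runs within each phase, then average across phases."""
    phase_means = [sum(runs) / len(runs) for runs in phases.values()]
    return sum(phase_means) / len(phase_means)

print(f"LFP composite: {composite(LFP_PHASES):,.2f} MB/s")
print(f"LDQ composite: {composite(LDQ_PHASES):,.2f} MB/s")
# Per-stream rate = phase data rate / number of streams,
# e.g. the LFP Write Only 1024 KiB phase ran 576 streams:
print(f"per-stream: {6_724.35 / 576:.2f} MB/s")
```

The computed composites match the reported 8,259.94 and 9,740.03 MB/s to rounding.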
SPC-2 Video On Demand (VOD) Reported Data

Data Rate     Number of   Data Rate
(MB/second)   Streams     per Stream   Price-Performance
4,404.00      5,600       0.79         $258.32
SPC-2 MBPS™ represents the aggregate data rate, in megabytes per second, of all three
SPC-2 workloads: Large File Processing (LFP), Large Database Query (LDQ), and Video on
Demand (VOD).
ASU (Application Storage Unit) Capacity represents the total storage capacity read and
written in the course of executing the SPC-2 benchmark.
A Data Protection Level of Protected using Mirroring configures two or more identical
copies of user data.
Storage Capacities and Relationships
The following summary documents the various storage capacities used in this benchmark and their relationships, as well as the storage utilization values required to be reported.

Global Storage Overhead:                     5,491.891 GB
Unused Capacity:                             3,179.727 GB
Configured Storage Capacity:               351,400.190 GB
  Sparing:                                  35,457.992 GB
  Data Protection (Mirroring):             157,971.099 GB
  Addressable Storage Capacity:            157,971.099 GB
    (90 SPC-2 Logical Volumes, 1,755.234 GB per volume; includes 3,352.277 GB of Unused Capacity)
    Application Storage Unit (ASU) Capacity: 154,618.823 GB
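The capacity figures nest arithmetically: the Configured Capacity holds the Addressable Capacity, its mirrored copy, and sparing, while the Physical Capacity adds the remaining unused capacity and global overhead. A quick Python check (values in GB from this summary, plus the Physical Storage Capacity of 360,071.808 GB quoted in the utilization section):

```python
# Capacity figures (GB) from the SPC-2 executive summary.
ASU         = 154_618.823   # Application Storage Unit Capacity
ADDRESSABLE = 157_971.099   # Addressable Storage Capacity
MIRROR      = 157_971.099   # Data Protection (Mirroring)
SPARING     =  35_457.992
UNUSED_ADDR =   3_352.277   # Unused Capacity inside the Addressable Capacity
UNUSED_CONF =   3_179.727   # Unused Capacity outside the Configured Capacity
OVERHEAD    =   5_491.891   # Global Storage Overhead

# Configured Capacity = Addressable Capacity + mirrored copy + sparing.
configured = ADDRESSABLE + MIRROR + SPARING
# Physical Capacity = Configured Capacity + remaining unused + global overhead.
physical = configured + UNUSED_CONF + OVERHEAD
# ASU Capacity = Addressable Capacity minus the unused capacity it contains.
asu = ADDRESSABLE - UNUSED_ADDR

print(round(configured, 3), round(physical, 3), round(asu, 3))
```

The computed values reproduce the reported 351,400.190 GB and 360,071.808 GB exactly, and the ASU Capacity to within a thousandth of a GB of rounding.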
SPC-2 Storage Capacity Utilization
Application Utilization 42.94%
Protected Application Utilization 85.55%
Unused Storage Ratio 2.75%
Application Utilization: Total ASU Capacity (154,618.823 GB) divided by Physical Storage Capacity (360,071.808 GB).

Protected Application Utilization: (Total ASU Capacity (154,618.823 GB) plus total Data Protection Capacity (157,971.099 GB) minus unused Data Protection Capacity (3,352.277 GB)) divided by Physical Storage Capacity (360,071.808 GB).

Unused Storage Ratio: Total Unused Capacity (9,884.280 GB) divided by Physical Storage Capacity (360,071.808 GB); this ratio may not exceed 45%.
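As a sanity check, the Application Utilization and Unused Storage Ratio can be reproduced directly from the capacities quoted in these definitions (a minimal Python sketch; values in GB):

```python
ASU_CAPACITY = 154_618.823   # Total ASU Capacity (GB)
TOTAL_UNUSED =   9_884.280   # Total Unused Capacity (GB)
PHYSICAL     = 360_071.808   # Physical Storage Capacity (GB)

application_utilization = ASU_CAPACITY / PHYSICAL   # reported as 42.94%
unused_storage_ratio    = TOTAL_UNUSED / PHYSICAL   # reported as 2.75%; limit is 45%

print(f"{application_utilization:.2%}  {unused_storage_ratio:.2%}")
```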
Detailed information for the various storage capacities and utilizations is available on
pages 22-23 in the Full Disclosure Report.
Differences between the Tested Storage Configuration (TSC) and Priced
Storage Configuration
There were no differences between the TSC and Priced Storage Configuration.
SPC-2/E Reported Data
The initial temperature, recorded during the first one minute of the SPC-2/E Idle Test was
72F. The final temperature, recorded during the last one minute of the SPC-2/E Large
Database Query (LDQ) Test was 67F.
Power Environment
Average RMS Voltage: 209.83        Average Power Factor: 0.978

Usage Profile              Hours of Use per Day            Nominal
                          Heavy  Moderate  Idle   Power (W)   Traffic (MBPS)   MBPS/W   Heat (BTU/hr)
Low Daily Usage:            0       8       16     6,049.33     2,185.53        0.36     20,640.91
Medium Daily Usage:         4      14        6     6,134.88     5,129.80        0.84     20,932.84
High Daily Usage:          18       6        0     6,209.13     7,512.21        1.21     21,186.16
Composite Metrics:                                 6,131.11     4,942.51        0.81

Annual Energy Use, kWh: 53,708.55
Energy Cost, $/kWh: $0.12          Annual Energy Cost, $: $6,445.03
HEAVY SPC-2 Workload: 6,220.98 W at a data rate of 7,830.75 MB/s.
MODERATE SPC-2 Workload: 6,173.58 W at a data rate of 6,556.58 MB/s.
IDLE SPC-2 Workload: 5,987.20 W at a data rate of zero (0).
The above usage profile describes conditions in environments that respectively impose light
(Low Daily Usage), moderate (Medium Daily Usage), and extensive (High Daily Usage) demands
on the Tested Storage Configuration (TSC). The data in this profile represents the
combined results of all three SPC-2 workloads: Large File Processing (LFP), Large
Database Query (LDQ) and Video on Demand Delivery (VOD).
The detailed SPC-2/E Reported Data and associated charts for each workload, including the Idle Test, are available in the SPC-2/E Full Disclosure Report (FDR), in the following sections of that document:

- SPC-2/E Idle Test chart
- SPC-2/E Large File Processing (LFP) Reported Data table and associated charts
- SPC-2/E Large Database Query (LDQ) Reported Data table and associated charts
- SPC-2/E Video on Demand Delivery (VOD) Reported Data table and associated charts
The definitions listed below for the remaining items in the above SPC-2/E Reported Data table are identical across the SPC-2/E Reported Data tables for each of the three individual SPC-2 workloads: LFP, LDQ, and VOD.
AVERAGE RMS VOLTAGE: The average supply voltage applied to the Tested Storage
Product (TSP) as measured during the Measurement Intervals of the SPC-2 Tests.
AVERAGE POWER FACTOR: The ratio of average real power, in watts, to the average apparent power, in volt-amperes, flowing into the Tested Storage Product (TSP) during the Measurement Intervals of the SPC-2 Tests.
NOMINAL POWER, W: The average power consumption over the course of a day (24
hours), taking into account hourly load variations.
NOMINAL TRAFFIC, MBPS: The average data rate over the course of a day (24 hours),
taking into account hourly load variations.
NOMINAL MBPS/W: The overall efficiency with which the reported data rate can be supported, expressed as the ratio of NOMINAL TRAFFIC to NOMINAL POWER.
NOMINAL HEAT, BTU/HR: The average amount of heat required to be dissipated over the
course of a day (24 hours), taking into account hourly load variations. (1 watt = 3.412
BTU/hr)
COMPOSITE METRICS: The aggregated NOMINAL POWER, NOMINAL TRAFFIC, and
NOMINAL MBPS/W for all three environments: LOW, MEDIUM, and HIGH DAILY USAGE.
ANNUAL ENERGY USE, KWH: An estimate of the average energy use across the three environments over the course of a year, computed as (NOMINAL POWER * 24 * 0.365), where the factor 0.365 combines the 365 days per year with the conversion from watt-hours to kilowatt-hours.

ENERGY COST, $/KWH: A standardized energy cost per kilowatt-hour.

ANNUAL ENERGY COST: An estimate of the annual energy cost across the three environments over the course of a year, computed as (ANNUAL ENERGY USE * ENERGY COST).
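The nominal figures in the Power Environment table follow directly from these definitions: each usage profile weights the per-workload rates by its hours of use per day, and the composite is the average of the three profiles. A minimal Python sketch of the traffic side, using the per-workload data rates quoted with the table (the nominal power and heat columns follow the same weighted-average pattern):

```python
# Per-workload data rates (MB/s) from the SPC-2/E summary.
RATES = {"heavy": 7_830.75, "moderate": 6_556.58, "idle": 0.0}

# Hours of use per day for each usage profile.
PROFILES = {
    "low":    {"heavy": 0,  "moderate": 8,  "idle": 16},
    "medium": {"heavy": 4,  "moderate": 14, "idle": 6},
    "high":   {"heavy": 18, "moderate": 6,  "idle": 0},
}

def nominal_traffic(hours: dict) -> float:
    """24-hour weighted average data rate (NOMINAL TRAFFIC, MBPS)."""
    return sum(hours[w] * RATES[w] for w in RATES) / 24

traffic = {name: nominal_traffic(h) for name, h in PROFILES.items()}
composite = sum(traffic.values()) / len(traffic)   # composite NOMINAL TRAFFIC

for name, t in traffic.items():
    print(f"{name}: {t:,.2f} MBPS")
print(f"composite: {composite:,.2f} MBPS")
```

The computed values reproduce the table's Nominal Traffic column (2,185.53, 5,129.80, and 7,512.21 MBPS) and the composite of 4,942.51 MBPS.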
Priced Storage Configuration Pricing

Description                                      Qty   Unit Price    Discount   Extended Price (with discount)
IBM XIV Storage System Model GEN3                  1   $183,195.00   70.00%     $54,958.50
2TB Interface Module w/12 2 TB drives              6   $122,430.00   70.00%     $220,374.00
2TB Data Module w/12 2 TB drives                   9   $122,430.00   70.00%     $330,561.00
Modem                                              1   $1,000.00     70.00%     $300.00
US/CA/LA/AP 60A pin cord                           1   $3,000.00     70.00%     $900.00
Module Software License                           15   $41,800.00    60.00%     $250,800.00
Module Software Maintenance (3 years)             15   $16,720.00    60.00%     $100,320.00
Monthly maintenance (XIV hardware)                36   $8,511.00     70.00%     $91,918.80
8 Gbps FC switch w/24 port active, 24 SFPs         2   $12,870.00    20.00%     $20,592.00
3 year warranty extension (switch)                 2   $2,330.00     20.00%     $3,728.00
Short wave 25 m fibre channel cable               36   $189.00       20.00%     $5,443.20
HBA (dual port 8 Gbps FC)                         18   $4,583.00     30.00%     $57,745.80

Total price                                                                     $1,137,641.30
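Each extended price is the quantity times the unit price, less the listed discount; summing the line items reproduces the total. A quick Python check, with the line items as quoted in the table above:

```python
# (description, qty, unit price in $, discount fraction) from the pricing table.
LINE_ITEMS = [
    ("IBM XIV Storage System Model GEN3",           1, 183_195.00, 0.70),
    ("2TB Interface Module w/12 2 TB drives",       6, 122_430.00, 0.70),
    ("2TB Data Module w/12 2 TB drives",            9, 122_430.00, 0.70),
    ("Modem",                                       1,   1_000.00, 0.70),
    ("US/CA/LA/AP 60A pin cord",                    1,   3_000.00, 0.70),
    ("Module Software License",                    15,  41_800.00, 0.60),
    ("Module Software Maintenance (3 years)",      15,  16_720.00, 0.60),
    ("Monthly maintenance (XIV hardware)",         36,   8_511.00, 0.70),
    ("8 Gbps FC switch w/24 port active, 24 SFPs",  2,  12_870.00, 0.20),
    ("3 year warranty extension (switch)",          2,   2_330.00, 0.20),
    ("Short wave 25 m fibre channel cable",        36,     189.00, 0.20),
    ("HBA (dual port 8 Gbps FC)",                  18,   4_583.00, 0.30),
]

def extended(qty: int, unit: float, discount: float) -> float:
    """Extended price with the discount applied, rounded to cents."""
    return round(qty * unit * (1.0 - discount), 2)

total = round(sum(extended(q, u, d) for _, q, u, d in LINE_ITEMS), 2)
print(f"Total price: ${total:,.2f}")
```

The computed total matches the reported $1,137,641.30.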
The above pricing includes the following:

- Acknowledgement of new and existing hardware and/or software problems within four hours.
- Onsite presence of a qualified maintenance engineer, or provision of a customer-replaceable part, within four hours of the above acknowledgement for any hardware failure that results in an inoperative Priced Storage Configuration component.
Priced Storage Configuration Diagram

[Diagram: the IBM XIV® Storage System (6 2TB Interface Modules and 9 2TB Data Modules, 12 disk drives per module, 180 2 TB 7,200 RPM SAS disk drives in total) connects to the host systems through 2 IBM 2498-B24 8 Gbps FC switches. The hosts use 18 dual-port 8 Gbps FC HBAs (8 + 8 + 2), providing 18 8 Gbps connections (8 + 8 + 2).]
Priced Configuration Components

Priced Storage Configuration:
  18 – dual-port 8 Gbps FC HBAs
  IBM XIV® Storage System:
    360 GiB memory/cache
    6 – 2 TB Interface Modules
    9 – 2 TB Data Modules
    24 – 8 Gbps FC front-end connections (18 used)
    30 – 4x6 Gbps SAS backend connections (30 used)
    180 – 2 TB 7200 RPM SAS disk drives (12 per interface and data module)
  2 – IBM 2498-B24 FC 8 Gbps switches
  36 – Short Wave 25 m fibre channel cables