Shared the stage with Kevin Kline; Paul Randal and Kimberly L. Tripp organized an excellent conference. This slide deck covers how to design large MS SQL Server architectures with thousands of databases that are high performance yet easy to manage. ioMemory by Fusion-io provides the performance, and SQL Sentry provides an amazing interface to manage and monitor thousands of databases.
Ceph Community Talk on High-Performance Solid State Ceph - Ceph Community
The document summarizes a presentation given by representatives from various companies on optimizing Ceph for high-performance solid state drives. It discusses testing a real workload on a Ceph cluster with 50 SSD nodes that achieved over 280,000 read and write IOPS. Areas for further optimization were identified, such as reducing latency spikes and improving single-threaded performance. Various companies then described their contributions to Ceph performance, such as Intel providing hardware for testing and Samsung discussing SSD interface improvements.
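Headline numbers like "over 280,000 read and write IOPS" are typically gathered with a benchmark tool such as fio. A minimal sketch of totaling the per-job IOPS out of fio's `--output-format=json` result (the sample document below is a trimmed, hypothetical result, not data from the deck):

```python
import json

# Sum aggregate read/write IOPS across all jobs in a fio JSON report.
def total_iops(fio_json: str) -> dict:
    doc = json.loads(fio_json)
    read = sum(job["read"]["iops"] for job in doc["jobs"])
    write = sum(job["write"]["iops"] for job in doc["jobs"])
    return {"read_iops": read, "write_iops": write}

# Hypothetical two-job result, shaped like fio's JSON output.
sample = json.dumps({
    "jobs": [
        {"read": {"iops": 140000.0}, "write": {"iops": 70000.0}},
        {"read": {"iops": 145000.0}, "write": {"iops": 72000.0}},
    ]
})

print(total_iops(sample))  # {'read_iops': 285000.0, 'write_iops': 142000.0}
```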
High Availability Options for Modern Oracle Infrastructures - Simon Haslam
Today's enterprise architect has a bewildering array of choices when it comes to building a highly available infrastructure to run Oracle. This presentation considers approaches using the Oracle technology layer, resilient virtualisation (Oracle and other vendors), hardware clustering and storage replication. It covers the core Oracle Database and Fusion Middleware products and, based on practical experience, aims to give attendees a broad picture of alternatives with their pros and cons.
Delivered on 5 December 2011 at UKOUG 2011 by Simon Haslam and Julian Dyke.
Accelerating Cassandra Workloads on Ceph with All-Flash PCIe SSDs - Ceph Community
This document summarizes the performance of an all-NVMe Ceph cluster using Intel P3700 NVMe SSDs. Key results include achieving over 1.35 million 4K random read IOPS and 171K 4K random write IOPS with sub-millisecond latency. Partitioning the NVMe drives into multiple OSDs improved performance and CPU utilization compared to a single OSD per drive. The cluster also demonstrated over 5GB/s of sequential bandwidth.
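The gap between raw device IOPS and client-visible write IOPS in results like these comes largely from replication and internal write amplification. A back-of-envelope sketch of that arithmetic, with purely illustrative numbers (not measurements from the deck):

```python
# Each client write costs `replication` replica writes, each inflated by
# journaling/metadata write amplification inside the OSD.
def client_write_iops(raw_device_iops: float, devices: int,
                      replication: int, write_amplification: float) -> float:
    return raw_device_iops * devices / (replication * write_amplification)

# e.g. 20 drives at 40k write IOPS each, 3x replication, 2x amplification:
print(client_write_iops(40_000, 20, 3, 2.0))  # ~133333 client write IOPS
```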
Ceph Day Shanghai - Recovery, Erasure Coding and Cache Tiering - Ceph Community
This document discusses recovery, erasure coding, and cache tiering in Ceph. It provides an overview of the RADOS components including OSDs, monitors, and CRUSH, which calculates data placement across the cluster. It describes how peering and recovery work to maintain data consistency. It also outlines how Ceph implements tiered storage with cache and backing pools, using erasure coding for durability and caching techniques to improve performance.
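The cache-and-backing-pool arrangement described above is wired up with a short, documented sequence of Ceph CLI commands. A sketch that builds that sequence (pool names are placeholders; run the emitted commands with the real `ceph` client against your cluster):

```python
# Standard Ceph CLI sequence for attaching a writeback cache tier to a
# backing (e.g. erasure-coded) pool. Pool names here are illustrative.
def cache_tier_commands(base_pool: str, cache_pool: str) -> list:
    return [
        f"ceph osd tier add {base_pool} {cache_pool}",
        f"ceph osd tier cache-mode {cache_pool} writeback",
        f"ceph osd tier set-overlay {base_pool} {cache_pool}",
    ]

for cmd in cache_tier_commands("ecpool", "hotpool"):
    print(cmd)
```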
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B... - Odinot Stanislas
This excellent document explains, step by step, how to install, monitor, and, above all, correctly benchmark PCIe/NVMe SSDs (not as simple as it sounds). Another key topic: how to analyze the I/O load of real applications. How many read and write IOPS, at what block sizes and bandwidth, and above all, what is the impact on SSD endurance and lifetime? In short, a must-read, and a big thanks to my colleague Andrey Kudryavtsev.
Authors:
Andrey Kudryavtsev, SSD Solution Architect, Intel Corporation
Zhdan Bybin, Application Engineer, Intel Corporation
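The endurance question the lab raises boils down to simple arithmetic over a drive's TBW (terabytes written) rating. A sketch of that calculation, with hypothetical drive figures:

```python
# How long a drive lasts at a given write rate, from its TBW rating.
def endurance_years(rated_tbw: float, daily_writes_tb: float) -> float:
    return rated_tbw / (daily_writes_tb * 365)

# Drive writes per day (DWPD) implied by a TBW rating over the warranty.
def dwpd(rated_tbw: float, capacity_tb: float, warranty_years: float) -> float:
    return rated_tbw / (capacity_tb * warranty_years * 365)

# e.g. a 2 TB drive rated for 3650 TBW over a 5-year warranty:
print(dwpd(3650, 2.0, 5))          # 1.0 drive write per day
print(endurance_years(3650, 4.0))  # 2.5 years at 4 TB written per day
```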
This document discusses various options for deploying solid state drives (SSDs) in the data center to address storage performance issues. It describes all-flash arrays that use only SSDs, hybrid arrays that combine SSDs and hard disk drives, and server-side flash caching. Key points covered include the performance benefits of SSDs over HDDs, different types of SSDs, form factors, deployment architectures like all-flash arrays from vendors, hybrid arrays, server-side caching software, virtual storage appliances, and hyperconverged infrastructure systems. Choosing the best solution depends on factors like performance needs, capacity, data services required, and budget.
Intel(R) Xeon(R) E7 v3-based X6 platforms + Lenovo Flex System Interconnect Fabric solutions deliver a highly-reliable, cost-efficient and scalable system for your data center.
This document provides an overview and demonstration of EnterpriseDB's Failover Manager (EFM). It begins with an overview of EFM's capabilities in ensuring high availability and minimizing downtime during database upgrades or maintenance. It then covers installation and configuration prerequisites, supported platforms, and the EFM architecture involving primary, standby, and witness database nodes. The remainder demonstrates switchover and failover functionality through a live demo in a replication environment using CentOS 7.7 and EnterpriseDB PostgreSQL Advanced Server 13.
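The witness node in this primary/standby/witness architecture exists to break ties. A conceptual sketch (an illustration of the majority-vote idea, not EFM's actual algorithm) of why a standby should only promote itself when a majority agrees the primary is down:

```python
# A standby promotes only when a majority of cluster members agree the
# primary is unreachable, which prevents split-brain on a network partition.
def should_promote(votes_primary_down: int, cluster_size: int) -> bool:
    return votes_primary_down > cluster_size // 2

# 3-node cluster: standby + witness both see the primary as down -> promote.
print(should_promote(2, 3))  # True
# Only the standby sees it down (the standby may itself be partitioned).
print(should_promote(1, 3))  # False
```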
Learn how upcoming changes in the persistent memory market will affect deployments of in-memory computing and traditional applications. Using software innovations from SanDisk and the broad portfolio of flash storage hardware options, customers and developers can optimize applications for “flash extended memory”, the intersection of in-memory computing and persistent memory technologies.
This document provides an overview of Exadata patching. It notes that patching has improved considerably over time and that Oracle will patch Exadata systems for customers with support contracts. Exadata patches are applied using patchmgr and involve pushing new OS images to the storage cells, which reboot multiple times; database servers are patched using yum. Quarterly database patches contain RDBMS, CRS, and Diskmon patches applied together using opatch. It is important to test patches in non-production first and to have a patching plan.
Exadata has been around since 2008, and its software features are enhanced with each release. This presentation covers the 12.1.x.x series of software updates and some of the things you can now do with Exadata.
Whitepaper: Running Oracle e-Business Suite Database on Oracle Database Appli... - Maris Elsins
This is the whitepaper for my Collaborate 13 presentation with the same title. It describes how Pythian completed a migration of an eBS R12 database to ODA (Oracle Appliance Kit v2.2).
Storage and Performance - Batch Processing, Whiptail - Internet World
Batch processing allows jobs to run without manual intervention by shifting processing to less busy times. It avoids idling computing resources and allows higher overall utilization. Batch processing provides benefits like prioritizing batch and interactive work. The document then discusses different approaches to batch processing like dedicating all resources to it or sharing resources. It outlines challenges like systems being unavailable during batch processing. The rest of the document summarizes Whiptail's flash storage solutions for accelerating workloads and reducing costs and resources compared to HDDs.
Oracle Database Appliance - RAC in a Box, Some Strings Attached - Fuad Arshad
The document discusses the deployment of an Oracle Database Appliance (ODA). It begins by describing the components of the ODA, including its two server nodes and storage configuration. It then discusses the important predeployment tasks, such as cabling and collecting networking information. The main sections cover deploying the Integrated Lights Out Manager (ILOM) via serial port, and then using the ILOM to configure the network settings and power on the database nodes for deployment.
The document discusses 3PAR storage solutions and their benefits for virtualized environments using VMware. 3PAR offers thin provisioning, large volume sizes, and fine-grained virtualization which help address issues with ESX servers like random I/O stresses, time-consuming management as servers consolidate, and preference for large storage volumes. 3PAR solutions provide benefits like reduced storage administration, increased capacity utilization, and support for high server consolidation ratios.
1. SQL Server performance on VMware was tested and found to achieve equivalent or better performance than physical hardware, with 10x better disk subsystem I/O performance.
2. Critical SQL performance counters were monitored and maintained acceptable levels.
3. Consolidating multiple SQL VMs onto fewer physical servers through virtualization can save significant costs on hardware, space, power and cooling while providing high availability and disaster recovery.
Moving to PCI Express Based SSDs with NVM Express - Odinot Stanislas
A very good presentation introducing NVM Express, which will surely be the (near-)future interface for SSD "disks". Farewell SAS and SATA; welcome PCI Express in servers (and client machines).
Using Preferred Read Groups in Oracle ASM (Michael Ault) - Louis Liu
This document describes an optimized Oracle database architecture that leverages Automatic Storage Management (ASM) and Preferred Read Groups (PRG) to maximize performance while maintaining reliability and controlling costs. It uses solid state disks (SSDs) mirrored with traditional disks in ASM to provide fast reads from SSDs without sacrificing redundancy. Benchmark results show this architecture completes the same workload over 12 times faster than an all-disk configuration by serving reads from SSDs through the ASM preferred read feature.
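The mechanism above hinges on ASM reading from whichever mirror copy sits in the preferred failure group (in Oracle this is driven by the `asm_preferred_read_failure_groups` initialization parameter), while writes still go to all mirrors. A conceptual sketch of that read-path selection; the failure-group names and extent labels are illustrative, not ASM internals:

```python
# Each extent is mirrored across failure groups; reads are served from the
# preferred (SSD) group when a copy exists there, otherwise from any mirror.
def read_copy(mirrors: dict, preferred_fg: str):
    return mirrors.get(preferred_fg, next(iter(mirrors.values())))

mirrors = {"SSD_FG": "extent@ssd", "HDD_FG": "extent@hdd"}
print(read_copy(mirrors, "SSD_FG"))  # extent@ssd
```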
UKOUG Tech15 - Deploying Oracle 12c Cloud Control in Maximum Availability Arc... - Zahid Anwar (OCM)
Common Cloud Control deployments can sometimes be exposed to single points of failure. In this presentation we discuss these pitfalls and show how deploying Cloud Control within the Maximum Availability Architecture provides a robust system. Aimed at a technical audience, we dive into providing High Availability and Disaster Recovery for the OMS repository and OMS Web Tier through RAC, Web Tier clustering, Data Guard, and storage replication. We take the audience through the simple but effective steps required for this type of deployment, in addition to the licensing implications of the Maximum Availability Architecture, including what Oracle gives you for free under a restricted-use license. This presentation is based on a recent project completed by our speaker, Zahid Anwar, in which he provided Maximum Availability Architecture for a Cloud Control installation monitoring six critical X4-2 eighth-rack Exadata machines.
Yesterday's thinking may hold that NVMe (NVM Express) is still in transition to a production-ready solution. In this session we discuss how NVMe has become ready for production, covering the history and evolution of NVMe and the Linux storage stack, and how NVMe has progressed to become the low-latency, highly reliable key-value store mechanism for databases that will drive future cloud expansion. Examples of protocol efficiencies and of storage engines that optimize for NVMe are also discussed. Please join us for an exciting session on how in-memory computing and persistence have evolved.
In this session we explore the various ways you can set up a connection strategy. We start with Oracle's UCP (Universal Connection Pool): its architecture and, most notably, how to size it. We discuss important concepts such as connection reservation and the distinction between a connection, a process, and a session.
Besides UCP, there are the Database Resident Connection Pool (DRCP) and the Proxy Resident Connection Pool (PRCP), both of which are also discussed. We additionally look into combining different types of pools: what are their typical use cases, and what are the pitfalls?
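The reservation concept above can be illustrated with a minimal bounded pool: a fixed number of connections, borrowed (reserved) and returned by callers. This is a generic sketch of the idea, not Oracle UCP's implementation:

```python
import queue

# Minimal bounded connection pool: reserve blocks until a connection is
# free (or times out), release returns it for reuse.
class ConnectionPool:
    def __init__(self, size: int, connect):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(connect())

    def reserve(self, timeout: float = 1.0):
        return self._free.get(timeout=timeout)

    def release(self, conn):
        self._free.put(conn)

pool = ConnectionPool(2, connect=lambda: object())
c1 = pool.reserve()
c2 = pool.reserve()   # pool is now exhausted
pool.release(c1)
c3 = pool.reserve()   # reuses the released connection
print(c3 is c1)       # True
```

Sizing the pool is then the real question the session addresses: too small and callers queue on `reserve`; too large and the database drowns in sessions.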
DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence - inside-BigData.com
In this deck, Johann Lombardi from Intel presents: DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence.
"Intel has been building an entirely open source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel Optane DC persistent memory and Intel Optane DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI."
Unlike traditional storage stacks that were primarily designed for rotating media, DAOS is architected from the ground up to make use of new NVM technologies, and it is extremely lightweight because it operates end-to-end in user space with full operating system bypass. DAOS offers a shift away from an I/O model designed for block-based, high-latency storage to one that inherently supports fine-grained data access and unlocks the performance of next-generation storage technologies.
Watch the video: https://youtu.be/wnGBW31yhLM
Learn more: https://www.intel.com/content/www/us/en/high-performance-computing/daos-high-performance-storage-brief.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Revisiting CephFS MDS and mClock QoS Scheduler - Yongseok Oh
This presentation covers CephFS performance scalability and evaluation results. Specifically, it addresses technical issues such as multi-core scalability, cache size, static pinning, recovery, and QoS.
Running E-Business Suite Database on Oracle Database Appliance - Maris Elsins
This is my Collaborate 13 presentation.
ODA is a pre-configured, simple-to-set-up, high-performance engineered system running an 11gR2 cluster. It is a great choice for small to medium-sized databases, and it can be used for the Oracle EBS database too. This paper shows how the standardized configuration of ODA can be adjusted to comply with the specific requirements of e-Business Suite without sacrificing ODA's flexibility and supportability. The paper also shares the author's experience migrating, running, and maintaining an R12 database tier on ODA.
The document discusses Oracle's Zero Data Loss Recovery Appliance. It aims to fundamentally change how databases are protected by pushing database changes in real-time instead of periodic backups. This minimizes impact on production databases and ensures zero data loss. It stores database changes efficiently on disk and can restore databases to any point in time using these deltas. It also creates space-efficient "virtual" full backups without requiring full backups. This enables long retention of backup history with minimal storage.
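The "virtual full backup" idea above can be pictured as one base copy plus per-interval deltas, with any point in time materialized by replaying deltas onto the base. A sketch of that reconstruction; the block IDs and contents are illustrative, not the appliance's on-disk format:

```python
# Materialize a point-in-time state from a base copy plus ordered deltas,
# where each delta records only the blocks that changed in its interval.
def virtual_full(base: dict, deltas: list, upto: int) -> dict:
    state = dict(base)
    for delta in deltas[:upto]:
        state.update(delta)
    return state

base = {"blk0": "A", "blk1": "B"}
deltas = [{"blk1": "B1"}, {"blk0": "A2", "blk2": "C"}]
print(virtual_full(base, deltas, 1))  # state after the first delta
print(virtual_full(base, deltas, 2))  # state after both deltas
```

Because only the deltas are stored, long retention costs far less than keeping repeated full backups.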
Simplifying Ceph Management with Virtual Storage Manager (VSM) - Ceph Community
VSM (Virtual Storage Manager) is an open source tool developed by Intel to simplify Ceph storage cluster management. It includes a controller that runs on a dedicated server and manages Ceph through agents on each Ceph node. VSM makes it easier to deploy, maintain, and monitor Ceph clusters, and it also integrates with OpenStack for storage orchestration.
This technical presentation shows you best practices with the EDB Postgres tools that are designed to make database administration easier and more efficient:
● Tune a new database using Postgres Expert
● Set up streaming replication in EDB Postgres Enterprise Manager (PEM)
● Create a backup schedule in EDB Postgres Backup and Recovery
● Automatically fail over with EDB Postgres Failover Manager
● Use SQL Profiler and Index Advisor to add indexes
The presentation also included a demonstration. To access the recording, visit www.enterprisedb.com and see the webcast recordings section, or email info@enterprisedb.com.
This document discusses applications that can experience performance issues when virtualized due to expensive address translation costs. It describes how virtual machines require an additional level of memory virtualization that introduces shadow page tables or nested page tables to map guest virtual addresses to machine memory. While hardware-assisted virtualization reduces exit frequencies and overhead compared to software address translation, it also makes the translation lookup more expensive due to deeper page table walks. In rare cases with very poor memory locality and high translation miss rates, the cycle costs of the two-level address translation can significantly degrade application performance when virtualized.
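The "deeper page table walks" cost above can be made concrete: a native n-level walk costs n memory references, but under nested paging each guest page-table access itself requires an m-level host walk, plus a final host walk for the data address, giving n*m + n + m references. A worked sketch:

```python
# Memory references needed to resolve one TLB miss under nested paging:
# each of the n guest page-table accesses needs an m-level host walk,
# plus one final m-level host walk for the guest-physical data address.
def nested_walk_refs(guest_levels: int, host_levels: int) -> int:
    return guest_levels * host_levels + guest_levels + host_levels

# x86-64 with 4-level tables on both sides: 24 references vs. 4 natively.
print(nested_walk_refs(4, 4))  # 24
```

That 6x blow-up per TLB miss is why workloads with poor memory locality can degrade noticeably when virtualized.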
Learn how upcoming changes in the persistent memory market will affect deployments of in-memory computing and traditional applications. Using software innovations from SanDisk and the broad portfolio of flash storage hardware options, customers and developers can optimize applications for “flash extended memory”, the intersection of in-memory computing and persistent memory technologies.
This document provides an overview of Exadata patching. It discusses that patching has improved over time. Oracle will patch Exadata systems for customers with support contracts. Exadata patches are applied using patchmgr and involve pushing new OS images to storage cells which reboot multiple times. Database servers are patched using yum. Quarterly database patches contain RDBMS, CRS, and Diskmon patches applied together using opatch. It is important to test patches in non-production first and have a patching plan.
Exadata has been around since 2008 and the software features are being enhanced each release. This Presentation talks about the 12.1.x.x series of Software updates and some of the things you can now do with Exadata
Whitepaper: Running Oracle e-Business Suite Database on Oracle Database Appli...Maris Elsins
This is the whitepaper for my Collaborate 13 presentation with the same title. It describes how Pythian completed a migration project of eBS R12 database top ODA (Oracle Appliance Kit v2.2).
Storage and performance- Batch processing, WhiptailInternet World
Batch processing allows jobs to run without manual intervention by shifting processing to less busy times. It avoids idling computing resources and allows higher overall utilization. Batch processing provides benefits like prioritizing batch and interactive work. The document then discusses different approaches to batch processing like dedicating all resources to it or sharing resources. It outlines challenges like systems being unavailable during batch processing. The rest of the document summarizes Whiptail's flash storage solutions for accelerating workloads and reducing costs and resources compared to HDDs.
Oracle Database Appliance - RAC in a box Some strings attached Fuad Arshad
The document discusses the deployment of an Oracle Database Appliance (ODA). It begins by describing the components of the ODA, including its two server nodes and storage configuration. It then discusses the important predeployment tasks, such as cabling and collecting networking information. The main sections cover deploying the Integrated Lights Out Manager (ILOM) via serial port, and then using the ILOM to configure the network settings and power on the database nodes for deployment.
The document discusses 3PAR storage solutions and their benefits for virtualized environments using VMware. 3PAR offers thin provisioning, large volume sizes, and fine-grained virtualization which help address issues with ESX servers like random I/O stresses, time-consuming management as servers consolidate, and preference for large storage volumes. 3PAR solutions provide benefits like reduced storage administration, increased capacity utilization, and support for high server consolidation ratios.
1. SQL Server performance on VMware was tested and found to achieve equivalent or better performance than physical hardware, with 10x better disk subsystem I/O performance.
2. Critical SQL performance counters were monitored and maintained acceptable levels.
3. Consolidating multiple SQL VMs onto fewer physical servers through virtualization can save significant costs on hardware, space, power and cooling while providing high availability and disaster recovery.
Moving to PCI Express based SSD with NVM ExpressOdinot Stanislas
Une très bonne présentation qui introduit la technologie NVM Express qui sera à coup sure l'interface du futur (proche) des "disques" SSD. Adieu SAS et SATA, bienvenu au PCI Express dans les serveurs (et postes clients)
Using preferred read groups in oracle asm michael aultLouis liu
This document describes an optimized Oracle database architecture that leverages Automatic Storage Management (ASM) and Preferred Read Groups (PRG) to maximize performance while maintaining reliability and controlling costs. It uses solid state disks (SSDs) mirrored with traditional disks in ASM to provide fast reads from SSDs without sacrificing redundancy. Benchmark results show this architecture completes the same workload over 12 times faster than an all-disk configuration by serving reads from SSDs through the ASM preferred read feature.
UKOUG Tech15 - Deploying Oracle 12c Cloud Control in Maximum Availability Arc...Zahid Anwar (OCM)
Common Cloud Control deployments can sometimes be exposed to single points of failure. In this presentation we will be discussing these pitfalls and how, through deploying Cloud Control within the Maximum Availability Architecture can provide a robust system. Aimed at a technical audience - we will dive into giving High Availability and Disaster Recovery for the OMS repository and OMS Web Tier through the use of RAC, Web Tier Clustering, Data Guard and Storage Replication. We will take our audience through the simple but effective steps required for this type of deployment in addition to the license implications of using Maximum Availability Architecture including what Oracle give you for free under a restricted-use license. This presentation is based on a recent project completed by our speaker Zahid Anwar. This project saw Zahid provide Maximum Availability Architecture for Cloud Control which was monitoring 6, critical X4-2 Eighth Exadata Machines.
Yesterday's thinking may still believe NVMe (NVM Express) is in transition to a production ready solution. In this session, we will discuss how the evolution of NVMe is ready for production, the history and evolution of NVMe and the Linux stack to address where NVMe has progressed today to become the low latency, highly reliable database key value store mechanism that will drive the future of cloud expansion. Examples of protocol efficiencies and types of storage engines that are optimizing for NVMe will be discussed. Please join us for an exciting session where in-memory computing and persistence have evolved.
In this session we want to explore the various ways you can setup a connection strategy. We'll start with Oracle's UCP (Universal Connection Pool), its architecture and most notably, how do you size it? We'll discuss important concepts such as: connection reservation, and the distinction between connection, process and session.
Besides UCP there are: Database Resident Connection Pool (DRCP) and Proxy Resident Connection Pool (PRCP). Which will both be discussed. We'll also look into combining different types of pools: what are their typical use-cases, and what are the pitfalls?
DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergenceinside-BigData.com
In this deck, Johann Lombardi from Intel presents: DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence.
"Intel has been building an entirely open source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel Optane DC persistent memory and Intel Optane DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI."
Unlike traditional storage stacks that were primarily designed for rotating media, DAOS is architected from the ground up to make use of new NVM technologies, and it is extremely lightweight because it operates end-to-end in user space with full operating system bypass. DAOS offers a shift away from an I/O model designed for block-based, high-latency storage to one that inherently supports fine- grained data access and unlocks the performance of next- generation storage technologies.
Watch the video: https://youtu.be/wnGBW31yhLM
Learn more: https://www.intel.com/content/www/us/en/high-performance-computing/daos-high-performance-storage-brief.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Revisiting CephFS MDS and mClock QoS SchedulerYongseok Oh
This presents the CephFS performance scalability and evaluation results. Specifically, it addresses some technical issues such as multi core scalability, cache size, static pinning, recovery, and QoS.
Running E-Business Suite Database on Oracle Database ApplianceMaris Elsins
This is my Collaborate 13 presentation.
ODA is a pre-configured, simple setup, high performance engineered system running 11gR2 cluster. It is a great choice for small to medium sized DBs and if you wish it can be used for Oracle EBS DB too. This paper will show you how the standardized configuration of ODA can be adjusted to comply with the specific requirements of e-Business Suite without sacrificing ODA’s flexibility and supportability. The paper will also share author’s experience migrating, running and maintaining R12 database tier on ODA.
The document discusses Oracle's Zero Data Loss Recovery Appliance. It aims to fundamentally change how databases are protected by pushing database changes in real-time instead of periodic backups. This minimizes impact on production databases and ensures zero data loss. It stores database changes efficiently on disk and can restore databases to any point in time using these deltas. It also creates space-efficient "virtual" full backups without requiring full backups. This enables long retention of backup history with minimal storage.
Simplifying Ceph Management with Virtual Storage Manager (VSM)Ceph Community
VSM (Virtual Storage Manager) is an open source tool developed by Intel to simplify Ceph storage cluster management. It includes a controller that runs on a dedicated server and manages Ceph through agents on each Ceph node. The VSM makes it easier to deploy, maintain, and monitor Ceph clusters, and also integrates with OpenStack for storage orchestration.
This technical presentation shows you the best practices with EDB Postgres tools, that are designed to make database administration easier and more efficient:
● Tune a new database using Postgres Expert
● Set up streaming replication in EDB Postgres Enterprise Manager (PEM)
● Create a backup schedule in EDB Postgres Backup and Recovery
● Automatically failover with EDB Postgres Failover Manager
● Use SQL Profiler and Index Advisor to add indexes
The presentation also included a demonstration. To access the recording, visit www.enterprisedb.com and browse to the webcast recordings section, or email info@enterprisedb.com.
This document discusses applications that can experience performance issues when virtualized due to expensive address translation costs. It describes how virtual machines require an additional level of memory virtualization that introduces shadow page tables or nested page tables to map guest virtual addresses to machine memory. While hardware-assisted virtualization reduces exit frequencies and overhead compared to software address translation, it also makes the translation lookup more expensive due to deeper page table walks. In rare cases with very poor memory locality and high translation miss rates, the cycle costs of the two-level address translation can significantly degrade application performance when virtualized.
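The extra cost of the two-level translation can be made concrete with the standard worst-case count: for an n-level guest page table and an m-level host (nested) page table, a single translation can touch (n+1)*(m+1)-1 memory locations, because each guest page-table reference, plus the final guest-physical data address, must itself be translated by a full host walk. A small arithmetic sketch:

```python
def nested_walk_refs(guest_levels=4, host_levels=4):
    # Each of the guest's page-table references (guest_levels of them)
    # plus the final guest-physical data address must be translated by a
    # host walk of host_levels references, then accessed itself:
    # (guest_levels + 1) * (host_levels + 1) - 1 total memory references.
    return (guest_levels + 1) * (host_levels + 1) - 1

print(nested_walk_refs())      # 24 for two 4-level tables
print(nested_walk_refs(4, 0))  # 4, i.e. the native (non-virtualized) walk
```

This is why a workload with poor memory locality and a high TLB miss rate can degrade noticeably when virtualized, even though hardware-assisted paging removes most exits.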
The document discusses accelerating Ceph storage performance using SPDK. SPDK introduces optimizations like asynchronous APIs, userspace I/O stacks, and polling mode drivers to reduce software overhead and better utilize fast storage devices. This allows Ceph to better support high performance networks and storage like NVMe SSDs. The document provides an example where SPDK helped XSKY's BlueStore object store achieve significant performance gains over the standard Ceph implementation.
The document discusses Ceph performance on all-flash storage systems. It describes optimizations made to Ceph's OSD architecture and write path that have led to significant performance improvements when deployed on SanDisk's InfiniFlash all-flash storage. These include reducing CPU utilization and improving throughput and latency. Example performance metrics are provided showing random read IOPS over 1.5M and latency under 5ms for most operations. The document also outlines the InfiniFlash hardware architecture and roadmap for further Ceph optimizations including new storage backends like BlueStore.
The document discusses Ceph storage performance on all-flash storage systems. It describes how SanDisk optimized Ceph for all-flash environments by tuning the OSD to handle the high performance of flash drives. The optimizations allowed over 200,000 IOPS per OSD using 12 CPU cores. Testing on SanDisk's InfiniFlash storage system showed it achieving over 1.5 million random read IOPS and 200,000 random write IOPS at 64KB block size. Latency was also very low, with 99% of operations under 5ms for reads. The document outlines reference configurations for the InfiniFlash system optimized for small, medium and large workloads.
The document discusses Ceph storage performance on all-flash storage systems. It notes that Ceph was originally optimized for HDDs and required tuning and algorithm changes to achieve flash-level performance. SanDisk worked with the Ceph community to optimize the object storage daemon (OSD) for flash, improving read and write throughput. Benchmark results show the SanDisk InfiniFlash system delivering over 1 million IOPS and 15GB/s throughput using Ceph software. Reference configurations provide guidance on hardware requirements for small, medium, and large workloads.
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers – Ceph Community
The document discusses a presentation about Ceph on all-flash storage using InfiniFlash systems to break performance barriers. It describes how Ceph has been optimized for flash storage and how InfiniFlash systems provide industry-leading performance of over 1 million IOPS and 6-9GB/s of throughput using SanDisk flash technology. The presentation also covers how InfiniFlash can provide scalable performance and capacity for large-scale enterprise workloads.
InterConnect 2016 yps-2749_02232016_as presented – Bruce Semple
Turbo LAMP is a collaboration between IBM, Canonical, Zend, MariaDB, and Mellanox to optimize the LAMP stack (Linux, Apache, MySQL, PHP) for performance on IBM Power Systems. The partners worked to modernize and optimize the open source LAMP platform for IBM's POWER8 architecture. This provides faster and more efficient support for popular applications built on LAMP stacks, such as Magento, Drupal, SugarCRM, and WordPress. It also enables faster ROI by allowing clients and managed service providers to support more users and generate more revenue using fewer system resources.
Presentation & discussion around low-level graphics APIs. This was a quickly made presentation that I put together for a discussion with Intel and fellow ISVs, thought it could be worth sharing
Ben Prusinski is presenting on Oracle R12 E-Business Suite performance tuning. He will cover methodology, best practices, and techniques from basic to advanced. The presentation includes tuning at the infrastructure, application, and database levels with a focus on a holistic approach. Specific areas that will be discussed are concurrent manager tuning including queue size, sleep cycle, cache size, and number of processes.
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture – Ceph Community
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, and techniques for configuring and optimizing all-flash Ceph performance.
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture – Danielle Womboldt
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, QCT's lab test environment, Ceph tuning recommendations, and benefits of using multi-partitioned NVMe SSDs for Ceph OSDs.
Ceph Day Seoul - AFCeph: SKT Scale Out Storage Ceph – Ceph Community
SK Telecom is optimizing Ceph for all-flash storage to improve performance and efficiency. Recent work includes enhancing BlueStore, implementing quality of service controls, and exploring data deduplication techniques. Looking ahead, SKT aims to further leverage NVRAM/SSD technologies and expand use of all-flash Ceph in its cloud infrastructure.
Presentation: architecting a cloud infrastructure – solarisyourep
This document provides an agenda and overview for a session on architecting a cloud infrastructure. The agenda includes introductions, gathering requirements, sizing and scaling, host design, vCenter design, cluster design, networking and storage considerations. It emphasizes the importance of gathering requirements from customers and conceptualizing the design based on those requirements. It also discusses various design considerations and best practices for each component of a cloud infrastructure.
This document discusses the benefits of using Linux on IBM Power systems servers. It claims that Power systems can reduce costs through higher performance, consolidation, and open source software like KVM and OpenStack. It seeks to dispel myths that Power systems are expensive, that virtualization is different, and that the architecture is closed. It provides examples of using Power systems with Linux to gain performance advantages for applications like SAP and databases through higher core counts, memory and bandwidth compared to x86 servers.
Sparc m6 32 in-memory infrastructure for the entire enterprise – solarisyougood
The document discusses Oracle's new SPARC M6-32 server. Key points include:
- It features 384 cores, 32TB of memory, and can scale to support very large databases and workloads entirely in memory.
- It offers 2x the cores and throughput compared to prior M5 servers, and can support queries up to 7x faster when run entirely in memory.
- Built-in virtualization allows for flexible logical partitioning without performance penalties. The system is designed for continuous availability.
Similar to SQLintersection keynote a tale of two teams (20)
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf – Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
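As a rough illustration of what vector search does under the hood, here is a brute-force sketch: embeddings are compared by cosine similarity and the top-k closest documents are returned. (This is not the MongoDB Atlas API; Atlas uses approximate-nearest-neighbor indexes rather than a linear scan, and the document names below are made up.)

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query, index, k=2):
    # Rank every stored (doc_id, embedding) pair by similarity to the query.
    return sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)[:k]

index = [("doc-a", [1.0, 0.0]), ("doc-b", [0.7, 0.7]), ("doc-c", [0.0, 1.0])]
top = search([1.0, 0.1], index, k=1)
print(top[0][0])  # doc-a
```

Semantic search engines follow this shape, with real embeddings coming from a text-embedding model and the linear scan replaced by an ANN index for scale.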
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack – shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you... – Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
20 Comprehensive Checklist of Designing and Developing a Website – Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf – Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... – SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Full-RAG: A modern architecture for hyper-personalization – Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
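The pre-processing, inference, and post-processing stages described above can be sketched as a chain of stand-in functions. All names here are hypothetical illustrations of the pipeline shape, not the Nx AI Manager API, and the "model" is a stub rather than a real inference engine:

```python
# Hypothetical edge-AI pipeline sketch: preprocess -> infer -> postprocess.

def preprocess(frame):
    # e.g. normalize 8-bit pixel values into [0, 1] for the model
    return [p / 255.0 for p in frame]

def infer(tensor):
    # Stand-in for a hardware-specific inference engine; a real pipeline
    # would dispatch to the engine chosen for the target hardware.
    return {"score": sum(tensor) / len(tensor)}

def postprocess(raw):
    # Turn the raw model output into an application-level decision.
    return "object" if raw["score"] > 0.5 else "background"

def pipeline(frame):
    return postprocess(infer(preprocess(frame)))

print(pipeline([200, 180, 220]))  # object
```

Keeping the three stages as separate, swappable steps is what lets the same pipeline be retargeted to different inference engines and hardware.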
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! – SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Don’t forget to enter your evaluation of this session using EventBoard!
Questions?
Thank you!
Editor's Notes
Interestingly, the default configuration of the server is generally quite good; even at very high scale there is not much additional work that can be done. The closest you get to a magic "make SQL Server go faster" trace flag is 834 (http://support.microsoft.com/kb/920093 and http://msdn2.microsoft.com/en-us/library/aa366720.aspx), which enables Windows large-page allocations for the buffer pool. If you see a flat node, it will fill up eventually once you start doing enough work in SQL.
On heavily loaded OLTP systems, there is enough NIC traffic that the NICs need their own CPU cores to process the TCP work; use the affinity mask to segregate the NIC cores. When connections increased to ~6000 (users had think time), we started seeing waits on THREADPOOL. Solution: increase sp_configure 'max worker threads'. You probably don't want to go higher than 4096; gradually increase it (the default max is 980). Avoid killing yourself in thread management – the bottleneck is likely somewhere else.
(Should be around 4:30 pm)
PCI-e v1 bus:
x4 slot: 750M/sec
x8 slot: 1.5GB/sec
x16 – fast enough, around 3GB/sec
Some "v2 compliant" PCI-e buses still run at v1 speeds!
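The slide's numbers line up with a simple back-of-envelope estimate: PCIe 1.x carries roughly 250 MB/s of raw bandwidth per lane, and for this sketch we assume about 25% is lost to protocol and encoding overhead (the exact fraction varies by workload):

```python
# Back-of-envelope check of the PCIe v1 throughput numbers on the slide.

RAW_PER_LANE_MB = 250   # approximate PCIe 1.x per-lane raw bandwidth, MB/s
EFFICIENCY = 0.75       # assumed usable fraction after protocol overhead

def usable_mb_per_sec(lanes):
    return lanes * RAW_PER_LANE_MB * EFFICIENCY

for lanes in (4, 8, 16):
    print(f"x{lanes}: {usable_mb_per_sec(lanes):.0f} MB/s")
# x4 -> 750 MB/s, x8 -> 1500 MB/s, x16 -> 3000 MB/s, matching the slide
```

The practical takeaway is the slide's last point: a fast card in a slot that negotiates down to v1 (or to fewer lanes) silently caps your storage throughput.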
Interesting Shape, what’s causing it?
The hardware between the CPU and the physical drive is often complex, with different topologies depending on vendor and technology. Two major topologies for SQL Server storage:
DAS – Direct Attached Storage. Standards: (SCSI), SAS, SATA. RAID controller in the machine; PCI-X or PCI-E direct access.
SAN – Storage Area Networks. Standards: iSCSI or Fibre Channel (FC). Host Bus Adapters or Network Cards in the machine; switches / fabric access to the disk.
SAN & Tiered Storage Arrays:
SAN – Data is explicitly placed on various disk groups, which the admin must track. Moving data between tiers is manual and typically offline. Granularity is whatever the admin decides to move. Depends on the admin tracking storage hot spots and usage.
Tiered SAN – The array tracks usage patterns and automatically moves data between storage tiers. Data movement is in the background and fully online. Granularity is LUN today, moving to finer-grained. Depends on the array tracking storage hot spots and usage.
(Should be around 4:45-4:50 pm)
(Should be around 5:00 – 5:10 pm)
First image = basic AG for high availability and disaster recovery
Second image = Node and File Share Majority quorum model
Third image = Node Majority quorum model
These are the default tools.
Open up the Always On view for the Instance/Group Matrix. Illustrate the thick green lines from SQL2 to SQL4.
Start the job on SQL2, "Keynote workload". The lines will begin to turn red. Discuss how the IO load on SQL2 (Fusion-IO) is backing up on SQL4 (slow disks).
Start the job on SQL1, "Keynote move C/F to SQL1". Takes a couple of minutes. Nodes shut down, then flip over to SQL1. The restore to create the database file is super fast – super-low latency because of Fusion-IO. May have time to run "Keynote workload" again on SQL1 after the flip.
Start the job on SQL1, "Revert". Takes even more time. The main point here is how long it takes, because we have to restore and reinitialize the database on the slow disks. It may not finish in time because the initialization and restore take so much longer.