This document introduces the asm_metrics utility for monitoring Automatic Storage Management (ASM) metrics. The utility provides real-time ASM metrics like reads/writes per second and I/O times. It is customizable, allowing users to view metrics by ASM instance, database instance, diskgroup, or failgroup. The document provides several use cases for how admins can use asm_metrics to monitor I/O performance and balance across various ASM components.
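A typical invocation can be sketched as follows (a sketch only; the flag names are assumptions based on the utility's published examples and may differ by version — check the script's help output for the real options):

```shell
# Hypothetical session -- run ./asm_metrics.pl -help for the actual flags.
./asm_metrics.pl -show=inst,dg -interval=2   # per-instance and per-diskgroup metrics every 2s
./asm_metrics.pl -show=fg -dg=DATA           # failgroup breakdown for one diskgroup
```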
Reduce Resource Consumption & Clone in Seconds your Oracle Virtual Environment (Bertrand Drouvot)
Bertrand Drouvot will present how to minimize resource consumption on a laptop by using Linux containers (LXC) and the btrfs file system. This allows an Oracle virtual environment, software, and databases to be cloned in seconds using little disk space. Specific use cases that will be demonstrated include cloning a database software home to apply CPU updates, cloning a database to apply CPU updates, and cloning a PDB. The benefits of using LXC for cloning will also be compared to cloning without LXC.
The document compares two methods for limiting the CPU usage of databases on the same server: instance caging and processor_group_name binding. It provides facts about how each method works, observations on performance differences, and examples of customer cases where each method may be best. Instance caging allows the CPU count to be limited online but leaves the SGA interleaved across NUMA nodes, while processor_group_name binds databases to specific CPUs, which requires a restart but keeps the SGA local. The best choice depends on factors such as the number of databases and whether some databases need guaranteed CPU resources.
The document provides information about finding the location of OCR and voting disks in an Oracle RAC environment. It states that the OCR location can be found in the /etc/oracle/ocr.loc file and the voting disk location can be found using the crsctl query css votedisk command. It also provides information on backing up the OCR and voting disks, such as using dd to backup voting disks and ocrconfig to backup and restore OCR.
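The commands named above can be sketched as follows (to be run as a privileged user on a cluster node; the device and backup paths are illustrative):

```shell
cat /etc/oracle/ocr.loc          # shows the registered OCR location
crsctl query css votedisk        # lists the voting disks
ocrconfig -showbackup            # lists automatic OCR backups
ocrconfig -manualbackup          # takes a manual OCR backup
# Pre-11.2 voting disks can be backed up with dd (device name illustrative):
dd if=/dev/raw/raw1 of=/backup/votedisk.bak bs=4k
```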
12cR2 Single-Tenant: Multitenant Features for All Editions (Franck Pachot)
Multitenant architecture is available even without Oracle's multitenant option. This session takes a look at the overhead and the 12.2 new features so that you can choose between single-tenant and non-container databases. These features include agility in data movement, easy flashback, and fast upgrade.
Presentation at the March 2019 Dutch Postgres User Group Meetup on lessons learnt while migrating from Oracle to Postgres, demonstrated via Vagrant test environments and generic pgbench datasets.
This document summarizes new features in Oracle 11g Data Guard. Key changes include enabling redo transport compression, active standby for real-time queries, creating snapshot standbys, automatically replacing corrupted blocks, building physical standbys with RMAN, allowing dynamic parameter changes for logical standbys, supporting compressed tables, and applying parallel DDLs in logical standbys.
The document discusses setting up an Oracle 12c Active Data Guard physical standby database using RMAN DUPLICATE FROM ACTIVE. It involves 3 steps:
1) Configuring the primary and standby databases, including creating required directories, adding static entries to listener.ora, and editing tnsnames.ora.
2) Running RMAN DUPLICATE FROM ACTIVE from the primary to create the standby database while the standby (auxiliary) instance is in NOMOUNT mode.
3) After duplicate completes, configuring redo transport on both primary and standby, adding standby redo logs, and opening the standby database to start managed recovery.
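Step 2 can be sketched with an RMAN script like the following (the connect strings, db_unique_name, and DUPLICATE clauses are assumptions for illustration; the exact options depend on your naming and file-location setup):

```sql
-- Connect with the standby instance already started in NOMOUNT:
--   rman TARGET sys@prim AUXILIARY sys@stby
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  SPFILE
    SET db_unique_name='STBY'
  NOFILENAMECHECK;
```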
Sql server engine cpu cache as the new ram (Chris Adkin)
This document discusses CPU cache and memory architectures. It begins with a diagram showing the cache hierarchy from L1 to L3 cache within a CPU. It then discusses how larger CPUs have multiple cores, each with their own L1 and L2 caches sharing a larger L3 cache. The document highlights how main memory bandwidth has not kept up with increasing CPU speeds and caches.
This document provides information about Pythian, a company that provides database management and consulting services. It begins by introducing the presenter, Christo Kutrovsky, and his background. It then provides details about Pythian, including that it was founded in 1997, has over 200 employees, 200 customers worldwide, and 5 offices globally. It notes Pythian's partnerships and awards. The document emphasizes Pythian's expertise in Oracle, SQL Server, and other technologies. It positions Pythian as a recognized leader in database management.
The document provides guidance on different backup and recovery scenarios for both user-managed and RMAN-managed recovery in Oracle databases. It lists 7 user-managed recovery scenarios including recovering a missing system tablespace, non-system tablespace, or datafile. It also covers control file recovery and incomplete recovery up to a point in time or log sequence. For RMAN recovery, it recommends configuring automatic backups and retention policies and describes using RMAN to backup datafiles, control files, and archive logs.
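The recommended RMAN configuration can be sketched as follows (the retention window is illustrative; adjust it to your recovery requirements):

```sql
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
BACKUP DATABASE PLUS ARCHIVELOG;
LIST BACKUP SUMMARY;
```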
(DAT402) Amazon RDS PostgreSQL: Lessons Learned & New Features (Amazon Web Services)
Learn the specifics of Amazon RDS for PostgreSQL’s capabilities and the extensions that make it powerful. This session begins with a brief overview of the RDS PostgreSQL service and how it provides high availability and durability, then deep dives into the new features we have released since re:Invent 2014, including major version upgrade and newly added PostgreSQL extensions. During the session we will also discuss lessons learned running a large fleet of PostgreSQL instances, including specific recommendations. In addition, we will present benchmarking results looking at differences between the 9.3, 9.4, and 9.5 releases.
This document provides an overview of key differences between SQL Server and PostgreSQL databases. It covers topics such as extensions, cost, case sensitivity, operating systems, processor configuration, write-ahead logging (WAL), checkpoints, disabling writes, page corruptions, MVCC, vacuum, database snapshots, system databases, tables, indexes, statistics, triggers, functions, security, backups, replication, imports/exports, maintenance, and monitoring. The document aims to help SQL Server DBAs understand how to administer and work with PostgreSQL databases.
Size can creep up on you. Some day you may wake up to a multi-terabyte Postgres system handling over 3000 tps staring you down. Learn the best ways to manage these systems as they grow, and find out what new features in 9.0 have made life easier for administrators and application developers working with big data.
This talk will lead you through solutions to problems Postgres faces when it gets big: backups, transaction wraparound, bloat, huge catalogs and upgrades. You need to monitor the right things, find the gems in DBA-friendly database functions and catalog tables, and know the right places to look to spot problems early. We’ll also go over monitoring best practices and open source tools to get the job done.
Working with multiple versions of Postgres back to version 8.2 will be covered, as well as tips on making the most of new features in 9.0. War stories will be taken from real-world work with Emma, an email marketing company with a few large databases.
The document discusses various PostgreSQL database hosting options on Amazon Web Services (AWS). It describes services like EC2 that allow running a customized PostgreSQL database on the cloud. It provides tips for setting up PostgreSQL replication, scaling the database vertically and horizontally, backups, monitoring with CloudWatch, and reducing costs. Other AWS services mentioned include S3, EBS, Redshift and tools for managing PostgreSQL on AWS.
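A minimal streaming-replication setup on an EC2-hosted primary might look like this postgresql.conf fragment (a sketch with illustrative values; the parameter names shown are for the PostgreSQL 9.x line, and the archive path is a placeholder):

```
# postgresql.conf on the primary
wal_level = hot_standby        # 'replica' on 9.6 and later
max_wal_senders = 3
wal_keep_segments = 64         # retain WAL for lagging standbys
archive_mode = on
archive_command = 'cp %p /path/to/archive/%f'   # often an S3 upload script on AWS
```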
Troubleshooting Complex Performance issues - Oracle SEG$ contention (Tanel Poder)
From Tanel Poder's Troubleshooting Complex Performance Issues series - an example of Oracle SEG$ internal segment contention due to some direct path insert activity.
Cassandra EU 2012 - Storage Internals by Nicolas Favre-Felix (Acunu)
The document discusses Cassandra's storage internals. It describes how Cassandra writes data to memtables and commit logs in memory before flushing to immutable SSTables on disk. It also explains how compaction merges SSTables to reclaim space and improve performance. For reads, Cassandra uses memtables, bloom filters on SSTables, key caches, and row caches to minimize disk I/O. Counters are implemented by coordinating writes across replicas.
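The bloom-filter idea mentioned above can be sketched in a few lines of Python (a toy for illustration only; Cassandra's real implementation uses different hash functions and sizing). The filter answers "definitely not present" or "possibly present", letting a read skip SSTables that cannot contain the requested key:

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = 0  # an int used as a bit array

    def _positions(self, key):
        # Derive several bit positions from salted MD5 digests of the key.
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False means the key was never added; True means "maybe".
        return all((self.bits >> pos) & 1 for pos in self._positions(key))

sstable_filter = BloomFilter()
sstable_filter.add("row-key-1")
```

A false positive only costs an unnecessary disk read; a false negative is impossible, which is why the filter can safely gate SSTable access.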
PostgreSQL is designed to be easily extensible. For this reason, extensions loaded into the database can function just like built-in features. In this session, we will learn more about the PostgreSQL extension framework and how extensions are built, look at some popular extensions, and discuss managing these extensions in your deployments.
New features in ProxySQL 2.0 (updated to 2.0.9) by Rene Cannao (ProxySQL) - Altinity Ltd
ProxySQL 2.0 includes several new features such as query cache improvements, GTID causal reads for consistency, native Galera cluster support, Amazon Aurora integration, LDAP authentication, improved SSL support, a new audit log, and performance enhancements. It also adds new monitoring tables, variables, and configuration options to support these features.
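As a sketch, the native Galera support is configured through ProxySQL's admin interface (the hostgroup numbers and values below are illustrative, and only a subset of the table's columns is shown; consult the ProxySQL documentation for the full column list):

```sql
-- Connect to the ProxySQL admin interface (default port 6032)
INSERT INTO mysql_galera_hostgroups
  (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
   offline_hostgroup, active, max_writers)
VALUES (10, 20, 30, 40, 1, 1);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```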
AWR DB performance Data Mining - Collaborate 2015 (Yury Velikanov)
The Oracle database AWR performance repository is a hidden treasure: there are a lot of very useful details about your system's behavior hidden in it. This presentation is designed to give you the knowledge you need to start leveraging the data beyond what standard AWR-based reports allow. The author will walk you through several practical examples from his experience where AWR proved to be one of the best information sources. You will learn how to start accessing AWR tables, along with a few areas where you should be careful. We will wrap up the presentation with more examples and a Q&A section.
Objective 1: Give enough information to start mining AWR tables to extract performance data for troubleshooting different issues
Objective 2: Demonstrate practical examples on how AWR has been used to troubleshoot different performance problems
Objective 3: Let you consider AWR as a good additional source for performance issues troubleshooting
This document describes how to configure MySQL database replication between a master and slave server. The key steps are:
1. Configure the master server by editing its configuration file to enable binary logging and set the server ID. Create a replication user and grant privileges.
2. Export the databases from the master using mysqldump.
3. Configure the slave server by editing its configuration file to point to the master server. Import the database dump. Start replication on the slave.
4. Verify replication is working by inserting data on the master and checking it is replicated to the slave.
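Steps 1 and 3 can be sketched as follows (the server IDs, host names, log coordinates, and passwords are placeholders):

```
# my.cnf on the master
[mysqld]
server-id = 1
log_bin   = /var/log/mysql/mysql-bin.log
```

```sql
-- On the master: create the replication user
CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave (coordinates come from SHOW MASTER STATUS on the master):
CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='repl',
  MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
```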
MySQL Database – Basic User Guide
The document discusses MySQL database architecture including physical and logical structures. It describes configuration files, log files, storage engines and the SQL execution process. Key points covered include the MySQL configuration file, error log, general log, slow query log, binary log and storage engines like InnoDB, MyISAM, MEMORY etc. User management topics like CREATE USER, GRANT, REVOKE are also summarized.
The document provides an overview of PostgreSQL performance tuning. It discusses caching, query processing internals, and optimization of storage and memory usage. Specific topics covered include the PostgreSQL configuration parameters for tuning shared buffers, work memory, and free space map settings.
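The parameters mentioned can be sketched in a postgresql.conf fragment (the values are illustrative starting points, not recommendations; max_fsm_pages applies only to pre-8.4 releases, where the free space map was configured manually):

```
shared_buffers       = 2GB      # ~25% of RAM is a common starting point
work_mem             = 16MB     # per sort/hash operation, per backend
effective_cache_size = 6GB      # planner hint: OS cache + shared_buffers
max_fsm_pages        = 200000   # free space map sizing (pre-8.4 only)
```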
This document provides an overview of implementing Oracle 10g RAC with Automatic Storage Management (ASM) on AIX. It describes ASM, which allows Oracle databases to store data in raw device files that are managed by ASM for striping and mirroring. The document discusses storage and administration considerations for using ASM, tuning parameters, and provides a sample ASM installation process and references.
The document provides steps to extract data from a Hyperion Essbase cube and load it into a relational database using Oracle Data Integrator (ODI). There are three methods for extracting data from Essbase - using a Calc script, Report script, or MDX query. The steps include creating a Calc script using the DATAEXPORT function to extract data to a text file, configuring the Essbase connection in ODI's topology, reversing the Essbase cube, establishing the target database connection, creating an ODI interface using the LKM Hyperion Essbase DATA to SQL knowledge module, and running the interface to load the extracted Essbase data into the relational database tables.
The document discusses various topics related to using MongoDB including schema design, indexing, concurrency, and durability. For schema design, it recommends using small document sizes and separating documents that grow unbounded into multiple collections. For indexing, it emphasizes ensuring queries use indexes and introduces sparse indexes and index-only queries. It notes concurrency is coarse-grained currently but being improved. For durability, it discusses storage, journaling, replication, and write concerns.
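The indexing advice can be sketched in the mongo shell (the collection and field names are illustrative; older shells use ensureIndex instead of createIndex):

```javascript
// Sparse index: only documents that actually have 'email' are indexed
db.users.createIndex({ email: 1 }, { sparse: true })

// Index-only (covered) query: project only indexed fields so the
// documents themselves never need to be fetched from disk
db.users.find({ email: "a@example.com" }, { _id: 0, email: 1 })
```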
Improving Efficiency with Options in SAS (guest2160992)
Learning
Base SAS,
Advanced SAS,
Proc SQL,
ODS,
SAS in financial industry,
Clinical trials,
SAS Macros,
SAS BI,
SAS on Unix,
SAS on Mainframe,
SAS interview Questions and Answers,
SAS Tips and Techniques,
SAS Resources,
SAS Certification questions...
visit http://sastechies.blogspot.com
The document discusses Unix kernel parameters that should be monitored and potentially increased after making changes to related Oracle Init.ora parameters. It provides a table matching Init.ora parameters like db_block_buffers and processes to Unix kernel parameters like shmmax and nproc. It also defines several common Unix kernel parameters and provides references on Unix configuration files where semaphores and shared memory can be set for different Unix platforms.
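On Linux, for instance, the corresponding kernel parameters can be sketched in /etc/sysctl.conf (the values are illustrative and platform-specific; other Unix variants set shared memory and semaphores elsewhere, as the document's references describe):

```
# /etc/sysctl.conf -- shared memory and semaphores for Oracle
kernel.shmmax = 4398046511104        # max size of a single shared memory segment
kernel.shmall = 1073741824           # total shared memory, in pages
kernel.sem    = 250 32000 100 128    # semmsl semmns semopm semmni
```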
This document provides an overview of Amazon Elastic MapReduce (EMR), including:
1) EMR allows users to quickly and cost-effectively process vast amounts of data by providing a managed Hadoop framework and supporting popular distributed frameworks like Spark.
2) The document demonstrates how to use EMR for tasks like clickstream analysis, log processing, and genomic research through example use cases.
3) It outlines the agenda which will cover Hadoop fundamentals, EMR features, how to get started, supported tools, and additional resources.
Amazon EMR enables fast processing of large structured or unstructured datasets, and in this presentation we'll show you how to set up an Amazon EMR job flow to analyse application logs and perform Hive queries against them. We also review best practices around data file organisation on Amazon Simple Storage Service (S3), how clusters can be started from the AWS web console and command line, and how to monitor the status of a Map/Reduce job.
Finally we take a look at Hadoop ecosystem tools you can use with Amazon EMR and the additional features of the service.
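Starting such a cluster from the command line can be sketched with the AWS CLI (the cluster name, release label, instance type and count, and S3 paths are illustrative placeholders):

```shell
aws emr create-cluster \
  --name "log-analysis" \
  --release-label emr-4.2.0 \
  --applications Name=Hadoop Name=Hive \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --log-uri s3://my-bucket/emr-logs/ \
  --use-default-roles
```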
See a recording of the webinar based on this presentation on YouTube here:
Check out the rest of the Masterclass webinars for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
See the Journey Through the Cloud webinar series here: http://aws.amazon.com/campaigns/emea/journey/
DataStax | Building a Spark Streaming App with DSE File System (Rocco Varela)
In this talk, we review a real-world use case that tested the Cassandra+Spark stack on DataStax Enterprise (DSE). We also cover implementation details around application high availability and fault tolerance using the new DSE File System (DSEFS). From a field and testing perspective, we discuss the strategies we can leverage to meet our requirements. Such requirements include (but are not limited to) functional coverage, system integration, usability, and performance. We will discuss best practices and lessons we learned, covering everything from application development to DSE setup and tuning.
About the Speaker
Rocco Varela Software Engineer in Test, DataStax
After earning his PhD in bioinformatics from UCSF, Rocco Varela took his passion for technology to DataStax. At DataStax he works on several aspects of performance and test automation around DataStax Enterprise (DSE) integrated offerings such as Apache Spark, Hadoop, Solr, and more recently DSE Graph.
2. About Me
Oracle DBA since 1999
OCP 9i, 10g, 11g
RAC Certified Expert
Exadata Certified Implementation Specialist
Blogger since 2012
@bertranddrouvot
Basketball fan
3. Are you happy with?
asmcmd iostat?
asmiostat.sh from MOS (ID 437996.1)?
I am not: the metrics provided are not enough, the way we can extract and display them is not customizable enough, and we cannot see how the I/O is distributed across all the ASM or database instances in a RAC environment.
4. Welcome to asm_metrics
1. It provides useful real-time metrics:
Reads/s: number of reads per second.
KbyRead/s: Kbytes read per second.
Avg ms/Read: average milliseconds per read.
AvgBy/Read: average bytes per read.
Writes/s: number of writes per second.
KbyWrite/s: Kbytes written per second.
Avg ms/Write: average milliseconds per write.
AvgBy/Write: average bytes per write.
2. It is RAC aware: you can display the metrics for all the ASM and/or database instances, or just a subset.
3. You can aggregate the results to fit your needs in a customizable way: per ASM instance, database instance, diskgroup, failgroup, or any combination of them.
4. It does not need any change to the source: simply download it and use it.
5. How does it work?
The script takes a snapshot each second (the default interval) from the gv$asm_disk_iostat cumulative view (or gv$asm_disk_stat) and computes the delta with the previous snapshot.
The only difference between gv$asm_disk_stat and gv$asm_disk is that the former returns information already available in memory, while the latter accesses the disks to re-collect some of it.
Since the required information does not need to be re-collected from the disks (a discovery of new disks is not needed), gv$asm_disk_stat is more appropriate here.
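The snapshot/delta loop described above can be sketched in Python (the utility itself is a Perl script; the `fetch` callable stands in for a query against gv$asm_disk_iostat and is an assumption of this sketch, not part of the tool):

```python
import time

def poll(fetch, interval=1, count=3):
    """Take a cumulative snapshot every `interval` seconds and
    yield the delta between consecutive snapshots."""
    prev = fetch()
    for _ in range(count):
        time.sleep(interval)
        curr = fetch()
        # Cumulative counters only ever grow, so the delta is the
        # activity that happened during this interval.
        yield {col: curr[col] - prev[col] for col in curr}
        prev = curr
```

Dividing each delta by the interval then turns the cumulative counters into per-second rates.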
6. Let’s use it
Important remark:
A blank value for one of the fields (INST, DBINST, DG, FG, DSK) means that the values have been aggregated over that particular field.
7. How are the metrics computed?
The metrics are computed this way:
Reads/s: delta of the READS column divided by the snapshot interval.
KbyRead/s: delta of the BYTES_READ column divided by the snapshot interval.
Avg ms/Read: delta of the READ_TIME column divided by the delta of the READS column.
AvgBy/Read: delta of the BYTES_READ column divided by the delta of the READS column.
Writes/s: delta of the WRITES column divided by the snapshot interval.
KbyWrite/s: delta of the BYTES_WRITTEN column divided by the snapshot interval.
Avg ms/Write: delta of the WRITE_TIME column divided by the delta of the WRITES column.
AvgBy/Write: delta of the BYTES_WRITTEN column divided by the delta of the WRITES column.
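The arithmetic above can be sketched as a single function taking two cumulative snapshots of one gv$asm_disk_iostat row (a sketch only: the column names match the view, but the assumption that READ_TIME and WRITE_TIME are expressed in seconds is mine, not stated in the slides):

```python
def snapshot_metrics(prev, curr, interval):
    """Compute the eight asm_metrics values from two cumulative
    snapshots. READ_TIME/WRITE_TIME assumed to be in seconds."""
    d = {col: curr[col] - prev[col] for col in prev}
    reads, writes = d["READS"], d["WRITES"]
    return {
        "Reads/s": reads / interval,
        "KbyRead/s": d["BYTES_READ"] / 1024 / interval,
        "Avg ms/Read": d["READ_TIME"] * 1000 / reads if reads else 0.0,
        "AvgBy/Read": d["BYTES_READ"] / reads if reads else 0,
        "Writes/s": writes / interval,
        "KbyWrite/s": d["BYTES_WRITTEN"] / 1024 / interval,
        "Avg ms/Write": d["WRITE_TIME"] * 1000 / writes if writes else 0.0,
        "AvgBy/Write": d["BYTES_WRITTEN"] / writes if writes else 0,
    }
```

Note the guard against a zero delta: with no reads (or writes) during the interval, the averages are reported as 0 rather than dividing by zero.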
8. What are the features? (1/3)
To explain the features, let's have a look at the help.
9. What are the features? (2/3)
1. You can choose the number of snapshots to display and the time to wait between snapshots, so you can see a limited number of snapshots separated by a specified wait time.
2. You can choose which ASM instances to collect the metrics from thanks to the -INST= parameter. Useful in a RAC configuration to see the distribution of the ASM metrics per ASM instance.
3. You can choose which database instances to collect the metrics for thanks to the -DBINST= parameter (wildcard allowed), in case you need to focus on a particular database or a subset of them.
4. You can choose which diskgroups to collect the metrics for thanks to the -DG= parameter (wildcard allowed), in case you need to focus on a particular diskgroup or a subset of them.
5. You can choose which failgroups to collect the metrics for thanks to the -FG= parameter (wildcard allowed), in case you need to focus on a particular failgroup or a subset of them.
10. What are the features? (3/3)
6. You can choose which Exadata cells to collect the metrics for thanks to the -IP= parameter (wildcard allowed), in case you need to focus on a particular cell or a subset of them.
7. You can aggregate the results at the ASM instance, database instance, diskgroup, failgroup (or Exadata cell IP) level thanks to the -SHOW= parameter. Useful to get an overview of what is going on per ASM instance, per diskgroup, or whatever you want, as this is fully customizable.
8. You can display the metrics per snapshot, the average values since the collection began (that is, since the script was launched), or both, thanks to the -DISPLAY= parameter.
9. You can sort based on the number of reads, the number of writes, or the number of IOPS (reads+writes) thanks to the -SORT_FIELD= parameter, so that you can find out which ASM instance, database instance, diskgroup, or failgroup is generating most of the I/O reads, writes, or IOPS (for example, which database is the top I/O consumer).
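The -SORT_FIELD= behaviour described in point 9 amounts to ordering the aggregated rows by one of three keys. A minimal sketch (the row dictionaries and the helper name are illustrative, not part of the utility):

```python
def sort_rows(rows, sort_field="reads"):
    """Order metric rows the way -SORT_FIELD= does: by reads,
    writes, or iops (reads + writes), busiest first."""
    keys = {
        "reads": lambda r: r["Reads/s"],
        "writes": lambda r: r["Writes/s"],
        "iops": lambda r: r["Reads/s"] + r["Writes/s"],
    }
    return sorted(rows, key=keys[sort_field], reverse=True)
```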
11. Find out the top physical I/O consumers through ASM in real time. This is useful because you don't need to connect to any database instance to get this information: it is "centralized" in the ASM instances.
Let's sort based on the number of reads per second this way:
./asm_metrics.pl -show=dbinst -sort_field=reads
Use case 1
12. I want to see the ASM preferred read feature in action for a particular diskgroup (BDT_PREF for example) and see the I/O metrics for the associated failgroups, checking that no reads are done "outside" the preferred failgroup.
Let’s configure the ASM preferred read parameters:
SQL> alter system set asm_preferred_read_failure_groups='BDT_PREF.WIN' sid='+ASM1';
System altered.
SQL> alter system set asm_preferred_read_failure_groups='BDT_PREF.JMO' sid='+ASM2';
System altered.
And check its behaviour thanks to the utility:
./asm_metrics.pl -show=dg,inst,fg -dg=BDT_PREF
Use case 2
13. I want to see the I/O distribution on Exadata across the cells (storage nodes), for example to check that the I/O load is well balanced across all the cells. This is feasible thanks to the ip field of the -show option:
./asm_metrics.pl -show=dbinst,dg,ip -dg=BDT
Use case 3
14. I want to see the I/O distribution recorded in the ASM instances:
./asm_metrics.pl -show=inst
I want to see the I/O distribution recorded in the ASM instances for each database instance:
./asm_metrics.pl -show=inst,dbinst
I want to see the I/O distribution recorded in the ASM instances for the database instances linked to the BDT database:
./asm_metrics.pl -show=inst,dbinst -dbinst=%BDT%
Use case 4, 5 & 6
15. I want to see the I/O distribution over the failgroups:
./asm_metrics.pl -show=fg
I want to see the I/O distribution and the associated metrics across the ASM instances and the failgroups:
./asm_metrics.pl -show=fg,inst
I want to see the I/O distribution across the ASM instances, diskgroups, and failgroups:
./asm_metrics.pl -show=fg,inst,dg
Use case 7, 8 & 9
16. The use cases above focused on snapshots taken during the last second, but you could also:
Take snapshots over a longer period of time thanks to the -interval parameter:
./asm_metrics.pl -interval=10 (for snaps of 10 seconds)
View the average since the collection began (not only the snap deltas) thanks to the -display parameter this way:
./asm_metrics.pl -show=dbinst -sort_field=iops -display=avg
Remark
17. To graph the metrics, I created the csv_asm_metrics utility to produce a csv file from the output of the asm_metrics utility.
Once you have the csv file, you can graph the metrics with your favourite visualization tool (I'll use Tableau as an example).
First you have to launch the asm_metrics utility this way (to ensure that all the fields are displayed):
-show=inst,dbinst,fg,dg,dsk for ASM >= 11g
-show=inst,fg,dg,dsk for ASM < 11g
and redirect the output to a text file:
./asm_metrics.pl -show=inst,dbinst,fg,dg,dsk > asm_metrics.txt
Graphing ASM metrics
18. Produce the csv file
./csv_asm_metrics.pl -if=asm_metrics.txt -of=asm_metrics.csv -d='2014/07/04'
The csv file looks like:
Snap Time,INST,DBINST,DG,FG,DSK,Reads/s,Kby Read/s,ms/Read,By/Read,Writes/s,Kby Write/s,ms/Write,By/Write
2014/07/04 13:48:54,+ASM1,BDT10_1,DATA,HOST31,HOST31CA0D1C,0,0,0.0,0,0,0,0.0,0
2014/07/04 13:48:54,+ASM1,BDT10_1,DATA,HOST31,HOST31CA0D1D,0,0,0.0,0,0,0,0.0,0
2014/07/04 13:48:54,+ASM1,BDT10_1,DATA,HOST32,HOST32CA0D1C,0,0,0.0,0,0,0,0.0,0
2014/07/04 13:48:54,+ASM1,BDT10_1,DATA,HOST32,HOST32CA0D1D,2,32,0.2,16384,0,0,0.0,0
2014/07/04 13:48:54,+ASM1,BDT10_1,FRA,HOST31,HOST31CC8D0F,0,0,0.0,0,0,0,0.0,0
2014/07/04 13:48:54,+ASM1,BDT10_1,FRA,HOST32,HOST32CC8D0F,0,0,0.0,0,0,0,0.0,0
2014/07/04 13:48:54,+ASM1,BDT10_1,REDO1,HOST31,HOST31CC0D13,0,0,0.0,0,0,0,0.0,0
As you can see:
1. The day has been added (to create a date) and the next ones will be calculated (should the snaps cross midnight).
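Once the csv file exists, a quick sanity check before loading it into Tableau takes only a few lines of Python (stdlib only; the sample rows are copied from the output above, and `reads_per_diskgroup` is a hypothetical helper of this sketch, not part of the utility):

```python
import csv
from collections import defaultdict
from io import StringIO

# Two rows copied from the csv_asm_metrics output shown above.
SAMPLE = """\
Snap Time,INST,DBINST,DG,FG,DSK,Reads/s,Kby Read/s,ms/Read,By/Read,Writes/s,Kby Write/s,ms/Write,By/Write
2014/07/04 13:48:54,+ASM1,BDT10_1,DATA,HOST32,HOST32CA0D1D,2,32,0.2,16384,0,0,0.0,0
2014/07/04 13:48:54,+ASM1,BDT10_1,FRA,HOST31,HOST31CC8D0F,0,0,0.0,0,0,0,0.0,0
"""

def reads_per_diskgroup(fh):
    """Sum the Reads/s column per diskgroup (DG) from a csv_asm_metrics file."""
    totals = defaultdict(float)
    for row in csv.DictReader(fh):
        totals[row["DG"]] += float(row["Reads/s"])
    return dict(totals)
```

In real use you would pass `open("asm_metrics.csv")` instead of the embedded sample.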
26. Thanks to these use cases, I hope you can see how customizable the utility is and how you could benefit from it in day-to-day work with ASM.
The main entry point for the tool is this blog page:
http://bdrouvot.wordpress.com/asm_metrics_script/
from which you'll be able to download the script or copy the source code.
Feel free to download it and to provide any feedback.
Conclusion