- Properly using parallel DML (PDML) for ETL can improve performance by leveraging multiple CPUs/cores.
- PDML must be enabled at the system, session, or statement level; additional steps may be needed to ensure the optimizer actually chooses a parallel plan.
- Considerations for using PDML include available parallel servers, restrictions like triggers or foreign keys, and implications on transactions.
- Oracle has different methods for data loading in PDML like HWM, TSM, and HWMB that impact extent allocation and fragmentation.
- The PQ_DISTRIBUTE hint controls how rows are distributed among parallel servers during the load to optimize performance and scalability.
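The enablement step and distribution hint described above can be sketched in a few lines. Table names (`sales_stg`, `sales_tgt`) and the degree of parallelism are illustrative assumptions, not from the original material:

```sql
-- Enable PDML for this session (from 12c on, it can also be requested
-- per statement with the ENABLE_PARALLEL_DML hint)
ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path parallel insert; PQ_DISTRIBUTE(t NONE) asks Oracle not to
-- redistribute rows between the producer and loader server sets
INSERT /*+ APPEND PARALLEL(t 8) PQ_DISTRIBUTE(t NONE) */ INTO sales_tgt t
SELECT /*+ PARALLEL(s 8) */ *
FROM   sales_stg s;

-- A PDML transaction must be committed (or rolled back) before the session
-- can read the modified table again
COMMIT;
```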
"It can always get worse!" – Lessons Learned in over 20 years working with Or... – Markus Michalewicz
First presented during the DOAG 2022 Conference and Exhibition, this presentation discusses and reviews the most significant lessons learned in over 20 years of working with Oracle Maximum Availability Architecture. It explains why documentation is good, but automated checks are better, and why standardization can help increase the availability of nearly all systems, including database systems.
Oracle RAC 12c Practical Performance Management and Tuning as presented during Oracle Open World 2013 with Michael Zoll.
This is part three of the Oracle RAC 12c "reindeer series" used for OOW13 Oracle RAC-related presentations.
This part concludes the main part of the "reindeer series" except for one bonus track, "Oracle Multitenant meets Oracle RAC 12c" (also available via SlideShare).
This presentation is based on Lawrence To's Maximum Availability Architecture (MAA) Oracle Open World presentation, covering the latest updates on high availability (HA) best practices across multiple architectures, features, and products in Oracle Database 19c. It considers all workloads – OLTP, DWH and analytics, and mixed workloads – as well as on-premises and cloud-based deployments.
Maximum Availability Architecture - Best Practices for Oracle Database 19c – Glen Hawkins
Provides the latest updates on high availability (HA) best practices in this well-established technical deep-dive session. Learn how to optimize all aspects of Oracle Active Data Guard 19c. See how to use session draining, transparent application continuity, Oracle RAC, and Oracle GoldenGate to mask outages and planned maintenance from users and to accelerate time to repair for a single database or your fleet of databases. Hear about the latest HA best practices with Oracle Multitenant and understand how the new sharded architecture can achieve even higher levels of HA and fault isolation for OLTP applications. Find out how everything you know about Oracle Maximum Availability Architecture (MAA) on-premises can be deployed in the cloud.
Any DBA from beginner to advanced level, who wants to fill in some gaps in his/her knowledge about Performance Tuning on an Oracle Database, will benefit from this workshop.
The Top 5 Reasons to Deploy Your Applications on Oracle RAC – Markus Michalewicz
A presentation for developers, DBAs, and managers. It was first presented in the course of the AIOUG Maximum Availability Architecture (MAA) focus month in August 2021. The first reason might surprise you!
The biggest headline at the 2009 Oracle OpenWorld was when Larry Ellison announced that Oracle was entering the hardware business with a pre-built database machine, engineered by Oracle. Since then, businesses around the world have started to use these engineered systems. This beginner/intermediate-level session will take you through my first 100 days of starting to administer an Exadata machine and all the roadblocks and all the success I had along this new path.
Oracle RAC on Extended Distance Clusters - Presentation – Markus Michalewicz
NOTE that a newer version of this presentation (covering Oracle RAC 12c Release) has been uploaded to my SlideShare: https://www.slideshare.net/MarkusMichalewicz/oracle-extended-clusters-for-oracle-rac
This presentation can be used as an illustration for some of the ideas and best practices discussed in the paper "Oracle RAC and Oracle RAC One Node on Extended Distance (Stretched) Clusters"
This presentation provides a clear overview of how Oracle Database In-Memory optimizes both analytics and mixed workloads, delivering outstanding performance while supporting real-time analytics, business intelligence, and reporting. It provides details on what you can expect from Database In-Memory in both Oracle Database 12.1.0.2 and 12.2.
Troubleshooting Complex Performance issues - Oracle SEG$ contention – Tanel Poder
From Tanel Poder's Troubleshooting Complex Performance Issues series - an example of Oracle SEG$ internal segment contention due to some direct path insert activity.
This is a recording of my Advanced Oracle Troubleshooting seminar preparation session - where I showed how I set up my command line environment and some of the main performance scripts I use!
Oracle RAC is an option to the Oracle Database Enterprise Edition. At least, this is what it is known for. This presentation shows the many ways in which the stack known as Oracle RAC can be used in the most efficient way for various use cases.
Redefining tables online without surprises – Nelson Calero
The Oracle database includes several features for moving data online, i.e., without preventing users from accessing the data while it is being moved (DML operations are not blocked).
One of those features is to change a table definition, using the package DBMS_REDEFINITION.
While moving a table is an online operation since version 12.2, redefinition is still needed for some changes, and it is also needed in older versions.
This session presents best practices based on experience of using it with big tablespaces, with examples covering all the steps needed to use DBMS_REDEFINITION under different scenarios, including the problems you can encounter, how to resolve them, and how this process differs between versions 11.2 and 12.
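As a rough sketch of the flow those steps follow (schema and table names here are invented for illustration; the interim table ORDERS_INTERIM must be pre-created with the desired new structure):

```sql
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  -- Verify the table can be redefined (raises an error otherwise)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS',
                                    DBMS_REDEFINITION.CONS_USE_PK);

  -- Start: instantiates the interim table and begins capturing changes
  DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');

  -- Copy indexes, triggers, constraints and privileges to the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    'APP', 'ORDERS', 'ORDERS_INTERIM',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    TRUE, TRUE, TRUE, FALSE, num_errors);

  -- Optionally resynchronize before the final (brief) locking phase
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');

  -- Swap the definitions; the original name now points to the new structure
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
END;
/
```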
Are your Oracle databases highly available? You have deployed Real Application Clusters (RAC), Data Guard, or Failover Clusters and are well protected against server failures? Great – the prerequisites for a highly available environment are in place. However, to ensure that backend infrastructure failures also remain transparent to the client, an appropriate configuration is required.
This lecture will discuss the Oracle technologies that can be used to achieve automatic client failover functionality. What are the advantages, but also the limitations of these technologies?
Make Your Application “Oracle RAC Ready” & Test For It – Markus Michalewicz
This presentation talks about the secrets behind Oracle RAC’s horizontal scaling algorithm, Cache Fusion, and how you can ensure that your application is “Oracle RAC ready”. It discusses do's and don'ts and how to test your application for "Oracle RAC readiness". This version was first presented at Sangam19.
AWS June 2016 Webinar Series - Amazon Redshift for Big Data Analytics – Amazon Web Services
Analyzing big data quickly and efficiently requires a data warehouse optimized to handle and scale for large datasets. Amazon Redshift is a fast, petabyte-scale data warehouse that makes it simple and cost-effective to analyze big data for a fraction of the cost of traditional data warehouses. By following a few best practices, you can take advantage of Amazon Redshift’s columnar technology and parallel processing capabilities to minimize I/O and deliver high throughput and query performance. This webinar will cover techniques to load data efficiently, design optimal schemas, and tune query and database performance.
Learning Objectives:
Get an inside look at Amazon Redshift's columnar technology and parallel processing capabilities
Learn how to migrate from existing data warehouses, optimize schemas, and load data efficiently
Learn best practices for managing workload, tuning your queries, and using Amazon Redshift's interleaved sorting features
Take an in-depth look at data warehousing with Amazon Redshift and get answers to your technical questions. We will cover performance tuning techniques that take advantage of Amazon Redshift's columnar technology and massively parallel processing architecture. We will also discuss best practices for migrating from existing data warehouses, optimizing your schema, loading data efficiently, and using workload management and interleaved sorting.
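To make the schema-design and loading advice concrete, here is a minimal, hypothetical sketch. The table layout, bucket name, IAM role, and key choices are assumptions to adapt, not prescriptions from the webinar:

```sql
-- The distribution key co-locates rows that are joined on cust_id;
-- the sort key lets Redshift skip blocks for date-range predicates
CREATE TABLE sales (
  sale_id   BIGINT,
  cust_id   BIGINT,
  sale_date DATE,
  amount    DECIMAL(12,2)
)
DISTKEY (cust_id)
SORTKEY (sale_date);

-- COPY loads many compressed files in parallel, spread across the slices
COPY sales
FROM 's3://example-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
CSV GZIP;
```

Splitting the input into multiple gzip'd files (ideally a multiple of the number of slices) is what lets COPY use the cluster's full parallelism.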
Reduce planned database downtime with Oracle technology – Kirill Loifman
How do you design an Oracle database system to minimize planned interruptions? That depends on the requirements, goals, SLAs, etc. The presentation follows a top-down approach: first we describe the major types of planned maintenance and prioritize them, and then, based on the system availability requirements, we find the most cost-effective techniques to address them. A bit of planning, strategy, and of course modern database and OS techniques, including the latest Oracle 12c features.
OracleStore: A Highly Performant RawStore Implementation for Hive Metastore – DataWorks Summit
Today, Yahoo! uses Hive in many different spaces, from ETL pipelines to ad hoc user queries. Increasingly, we are investigating the practicality of applying Hive to real-time queries, such as those generated by interactive BI reporting systems. In order for Hive to succeed in this space, it must be performant in all aspects of query execution, from query compilation to job execution. One such component is the interaction with the underlying database at the core of the Metastore.
As an alternative to ObjectStore, we created OracleStore as a proof-of-concept. Freed of the restrictions imposed by DataNucleus, we were able to design a more performant database schema that better met our needs. Then, we implemented OracleStore with specific goals built-in from the start, such as ensuring the deduplication of data.
In this talk we will discuss the details behind OracleStore and the gains that were realized with this alternative implementation. These include a reduction of 97%+ in the storage footprint of multiple tables, as well as query performance that is 13x faster than ObjectStore with DirectSQL and 46x faster than ObjectStore without DirectSQL.
SQL offers many powerful techniques for analyzing your data out of the box, but it is also extensible if you are still missing something – and this is now much easier in 18c with polymorphic table functions (PTF). As an evolution of table functions, a PTF is invoked in the FROM clause and can encapsulate custom processing of the input data, whereas the input row type does not have to be known at design time and the output row type may first be determined by the actual PTF invocation parameters. This session gives you an introduction based on simple examples. Discover how you can develop your own flexible and self-describing extensions while focusing on business logic and leaving complex things like parallel execution to the database.
In Oracle 18c, SQL and PL/SQL work even more closely together. A good example is Polymorphic Table Functions (PTF). With PTF, which is part of the ANSI SQL 2016 standard, you get a powerful and flexible tool to extend the existing analytical capabilities of SQL. The basics of the new functionality are discussed, and the implementation details are shown with concrete examples.
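For orientation, a deliberately tiny PTF might look like the following sketch (18c+; the package, function, and column names are invented). It adds one generated column, ROW_TAG, to whatever table it is applied to:

```sql
CREATE OR REPLACE PACKAGE tag_ptf AS
  FUNCTION describe(t IN OUT DBMS_TF.TABLE_T) RETURN DBMS_TF.DESCRIBE_T;
  PROCEDURE fetch_rows;
  -- The PTF itself: row-semantics, implemented by this package
  FUNCTION tag(t TABLE) RETURN TABLE
    PIPELINED ROW POLYMORPHIC USING tag_ptf;
END;
/
CREATE OR REPLACE PACKAGE BODY tag_ptf AS
  FUNCTION describe(t IN OUT DBMS_TF.TABLE_T) RETURN DBMS_TF.DESCRIBE_T IS
  BEGIN
    -- Declare one new output column; input columns pass through unchanged
    RETURN DBMS_TF.DESCRIBE_T(
      new_columns => DBMS_TF.COLUMNS_NEW_T(
        1 => DBMS_TF.COLUMN_METADATA_T(name => 'ROW_TAG',
                                       type => DBMS_TF.TYPE_VARCHAR2)));
  END;

  PROCEDURE fetch_rows IS
    env DBMS_TF.ENV_T := DBMS_TF.GET_ENV;
    col DBMS_TF.TAB_VARCHAR2_T;
  BEGIN
    -- Fill the new column for every row of the current row set
    FOR i IN 1 .. env.row_count LOOP
      col(i) := 'TAG_' || i;
    END LOOP;
    DBMS_TF.PUT_COL(1, col);
  END;
END;
/
-- Usage (emp is any existing table):  SELECT * FROM tag_ptf.tag(emp);
```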
Online Statistics Gathering for Bulk Loads - the official name of the feature - was introduced in Oracle 12.1. The idea is to gather optimizer statistics "on the fly" for direct path loads. Sounds good for ETL? In certain scenarios it makes sense, but even then there are many points to consider before it becomes a reliable part of your ETL processes. When exactly will it work and when not? Can you prevent it yourself? Documented and undocumented cases, known bugs. Which statistics are gathered and which are not? What has to be considered with partitioned tables? Is interval partitioning a special case?
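A quick way to see the feature in action (table names are assumed): create an empty segment, direct-path load it, and check the NOTES column of the dictionary view:

```sql
-- Empty segment: a prerequisite for online statistics gathering
CREATE TABLE sales_copy AS
SELECT * FROM sales WHERE 1 = 0;

-- Direct path load via the APPEND hint
INSERT /*+ APPEND */ INTO sales_copy
SELECT * FROM sales;
COMMIT;

-- NOTES = 'STATS_ON_LOAD' indicates the statistics came from the bulk load
SELECT num_rows, notes
FROM   user_tab_statistics
WHERE  table_name = 'SALES_COPY';
```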
Introduced in Oracle Database 12c, the new MATCH_RECOGNIZE clause allows pattern matching across rows and is often associated with Big Data, complex event processing, etc. Should SQL developers who are not (yet) faced with such tasks ignore it? No way! The new feature is powerful enough to simplify a lot of day-to-day tasks and to solve them in a new, simple and efficient way. Insight into the new syntax is given based on common examples, such as finding gaps, merging temporal intervals, or grouping on fuzzy criteria. Providing a more straightforward approach to solving known problems, the new functionality deserves a place in every developer's toolbox.
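As a flavor of the syntax, this sketch (table and column names are assumed) groups consecutive integer values into "islands"; the gaps in the sequence are exactly the ranges between consecutive matches:

```sql
SELECT first_id, last_id
FROM   t
MATCH_RECOGNIZE (
  ORDER BY id
  MEASURES FIRST(id) AS first_id,
           LAST(id)  AS last_id
  ONE ROW PER MATCH
  PATTERN (a b*)
  DEFINE b AS id = PREV(id) + 1   -- b continues the run started by a
);
```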
Obviously, performance is the key factor for bulk data processing. In practice, this means balancing well-structured, maintainable, reusable code against high performance.
Even though there are more features and optimizations that support bulk processing with each release of PL/SQL, the "Pure SQL" approach often leads to better performance.
Best practices, tips and tricks. How do I develop a complex SQL? How can Subquery Factoring help to increase readability? How does this help to test complex SQLs? What is a Row Generator? How do I change the cardinality of the original data set?
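A row generator in this context is simply a query that manufactures rows without a real source table; the classic Oracle idiom uses CONNECT BY against DUAL. The names below are illustrative:

```sql
-- Generate 10 rows: 1..10
SELECT LEVEL AS n
FROM   dual
CONNECT BY LEVEL <= 10;

-- Change the cardinality of a data set: expand each interval into one row
-- per day (hypothetical table intervals(id, start_d, end_d); LATERAL is 12c+)
SELECT i.id, i.start_d + g.n - 1 AS day_d
FROM   intervals i
CROSS JOIN LATERAL (
  SELECT LEVEL AS n
  FROM   dual
  CONNECT BY LEVEL <= i.end_d - i.start_d + 1
) g;
```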
An unconventional approach for ETL of historized data – Andrej Pashchenko
Maintaining data historization is a very common but time-consuming task in a data warehouse environment. The common techniques involve outer joins and some kind of change detection. This change detection must be done with respect to NULL values and is possibly the trickiest part. But, on the other hand, SQL offers standard functionality with exactly the desired behaviour: GROUP BY, or partitioning with analytic functions. Can it be used for this task?
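One NULL-safe change-detection idiom along these lines: DECODE treats two NULLs as equal, so combined with an analytic LAG it flags changed attribute values without verbose "x = y OR (x IS NULL AND y IS NULL)" predicates. The table and columns here are hypothetical:

```sql
SELECT id, attr, load_date,
       CASE
         WHEN DECODE(attr,
                     LAG(attr) OVER (PARTITION BY id ORDER BY load_date),
                     1, 0) = 0
         THEN 'CHANGED'
       END AS change_flag          -- NULL for unchanged rows
FROM src;
-- Note: the first row per id compares against a NULL LAG value and
-- needs special handling in a real load.
```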
Designing for Privacy in Amazon Web Services – KrzysztofKkol1
Data privacy is one of the most critical issues that businesses face. This presentation shares insights on the principles and best practices for ensuring the resilience and security of your workload.
Drawing on a real-life project from the HR industry, the various challenges will be demonstrated: data protection, self-healing, business continuity, security, and transparency of data processing. This systematized approach allowed us to create a secure AWS cloud infrastructure that not only met strict compliance rules but also exceeded the client's expectations.
Globus Compute with IRI Workflows - GlobusWorld 2024 – Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Enhancing Research Orchestration Capabilities at ORNL – Globus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Developing Distributed High-performance Computing Capabilities of an Open Sci... – Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis – Globus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Your Digital Assistant.
Making a complex approach simple: a straightforward process saves time. No more waiting to connect with the people that matter to you. Safety first is not a cliché – information is securely protected in cloud storage to prevent any third party from accessing your data.
Would you rather make your visitors feel burdened by making them wait? Or choose VizMan for a stress-free experience? VizMan is an automated visitor management system that works for any industry, not limited to factories, societies, government institutes, and warehouses. It is a new-age, contactless way of logging information about visitors, employees, packages, and vehicles. As a digital logbook, VizMan avoids unnecessary use of paper and space, since there is no need for bundles of registers left to collect dust in a corner of a room. It records visitors' essential details, helps schedule meetings between visitors and employees, and assists in supervising employee attendance. With VizMan, visitors don't need to wait for hours in long queues. VizMan treats visitors with the value they deserve, because we know time is important to you.
Feasible Features
One Subscription, Four Modules – Admin, Employee, Receptionist, and Gatekeeper – ensures confidentiality and prevents data from being manipulated
User Friendly – can be easily used on Android, iOS, and Web Interface
Multiple Accessibility – Log in through any device from any place at any time
One app for all industries – a Visitor Management System that works for any organisation.
Stress-free Sign-up
Visitor is registered and checked-in by the Receptionist
Host gets a notification, where they opt to Approve the meeting
Host notifies the Receptionist of the end of the meeting
Visitor is checked-out by the Receptionist
Host enters notes and remarks of the meeting
Customizable Components
Scheduling Meetings – Host can invite visitors for meetings and also approve, reject and reschedule meetings
Single/Bulk invites – Invitations can be sent individually to a visitor or collectively to many visitors
VIP Visitors – Additional security of data for VIP visitors to avoid misuse of information
Courier Management – Keeps a check on deliveries like commodities being delivered in and out of establishments
Alerts & Notifications – Get notified on SMS, email, and application
Parking Management – Manage availability of parking space
Individual log-in – Every user has their own log-in id
Visitor/Meeting Analytics – Evaluate notes and remarks of the meeting stored in the system
Visitor Management System is a secure and user-friendly database manager that records, filters, and tracks the visitors to your organization.
"Secure Your Premises with VizMan (VMS) – Get It Now"
Why React Native as a Strategic Advantage for Startup Innovation – ayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that the demand for this framework in the job market has been growing, making it a valuable skill.
But what makes React Native so popular for mobile application development? Among other benefits, it offers excellent cross-platform capabilities. With React Native, developers can write code once and run it on both iOS and Android devices, saving time and resources, which leads to shorter development cycles and faster time-to-market for your app.
Let’s take the example of a startup that wanted to release its app on both iOS and Android at once. Using React Native, they managed to create the app and bring it to market within a very short period. This gave them an advantage over their competitors, because they could reach a large user base that quickly generated revenue for them.
Advanced Flow Concepts Every Developer Should Know – Peter Caitens
Tim Combridge from Sensible Giraffe and Salesforce Ben presents some important tips that all developers should know when dealing with Flows in Salesforce.
Cyaniclab: Software Development Agency Portfolio – Cyanic lab
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
First Steps with Globus Compute Multi-User Endpoints – Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... – Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Prosigns: Transforming Business with Tailored Technology Solutions – Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce?XfilesPro
Worried about document security while sharing them in Salesforce? Fret no more! Here are the top-notch security standards XfilesPro upholds to ensure strong security for your Salesforce documents while sharing with internal or external people.
To learn more, read the blog: https://www.xfilespro.com/how-does-xfilespro-make-document-sharing-secure-and-seamless-in-salesforce/
How Recreation Management Software Can Streamline Your Operations.pptxwottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
2. About me
• Working at Trivadis, Düsseldorf
• Focusing on Oracle:
  • Data Warehousing
  • Application Development
  • Application Performance
• Course instructor "Oracle New Features for Developers"
@Andrej_SQL blog.sqlora.com
4. Parallel Processing in Oracle DB
• Parallel Query: SELECT
• Parallel DDL:
  • CTAS
  • CREATE INDEX
  • ALTER TABLE MOVE
  • …
• Parallel DML:
  • Parallel IAS (INSERT AS SELECT)
  • Parallel MERGE
  • Parallel UPDATE
  • Parallel DELETE
6. How to enable PDML
• Parallel Query and Parallel DDL are enabled by default
• Parallel DML has to be enabled first at system or session level:

  ALTER SESSION ENABLE PARALLEL DML;

• In 12c it is also possible with a hint at statement level:

  INSERT /*+ enable_parallel_dml parallel append */
  INTO sales
  SELECT /*+ parallel */ * FROM sales_v;

• Issue with the hint: hard parse on every execution, caution with plan stability
• But enabling PDML doesn’t yet mean that a parallel execution plan will be used
7. How do I know PDML was used?
• Check the position of the DML operation, e.g. LOAD AS SELECT, with respect to the query coordinator: if it appears above PX COORDINATOR, the load itself runs serially
• Check the Note section of the execution plan
• Check v$pq_sesstat

Serial load (PDML not used):

---------------------------------------------
Operation                          | Name
---------------------------------------------
INSERT STATEMENT                   |
 LOAD AS SELECT                    | T1
  PX COORDINATOR                   |
   PX SEND QC (RANDOM)             | :TQ1000
    OPTIMIZER STATISTICS GATHERING |
     PX BLOCK ITERATOR             |
      TABLE ACCESS FULL            | T2
---------------------------------------------
Note
- PDML disabled because object is not decorated with
  parallel clause

Parallel load (PDML used):

---------------------------------------------
Operation                          | Name
---------------------------------------------
INSERT STATEMENT                   |
 PX COORDINATOR                    |
  PX SEND QC (RANDOM)              | :TQ1000
   LOAD AS SELECT (HYBRID TSM/HWMB)| T1
    OPTIMIZER STATISTICS GATHERING |
     PX BLOCK ITERATOR             |
      TABLE ACCESS FULL            | T2
---------------------------------------------

SELECT * FROM v$pq_sesstat WHERE statistic LIKE 'DML%';

STATISTIC                      LAST_QUERY SESSION_TOTAL     CON_ID
------------------------------ ---------- ------------- ----------
DML Parallelized                        1             3          0
8. How to ensure that PDML is used
• Statement level or object level PARALLEL hint in the INSERT:

  INSERT /*+ parallel */ INTO t_copy t SELECT * FROM t_src;
  INSERT /*+ parallel(t) */ INTO t_copy t SELECT * FROM t_src;

• Forcing PDML in a session:

  ALTER SESSION FORCE PARALLEL DML;

• Auto DOP:

  ALTER SESSION SET parallel_degree_policy = AUTO;

• Parallel clause object decoration:

  CREATE TABLE t_copy (…) PARALLEL;
  ALTER TABLE t_copy PARALLEL;
9. How to ensure that PDML is used (2)
• Refer to the table "Parallelization Priority Order" in the documentation
• But test your ETL scenario!
• In case of doubt: statement level hints have the highest priority
10. Restrictions preventing PDML
• No PDML on tables with triggers
• No PDML with enabled foreign keys. Use reliable FK constraints (RELY DISABLE NOVALIDATE): valuable for the CBO, but not disruptive for ETL. Exception: reference partitioning!
• Not enough parallel servers available
• Parallel DML is not supported on a table with bitmap indexes if the table is not partitioned.
  IMPORTANT: For Partition Exchange Loading (PEL), don’t create any indexes on the temporary table before loading it!
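As a minimal sketch of the RELY constraint mentioned above (table and constraint names are purely illustrative), such a constraint is declared but neither enforced nor validated, so it informs the optimizer without blocking PDML:

```sql
-- Hypothetical fact/dimension tables: the constraint is not enforced
-- (DISABLE NOVALIDATE), but the optimizer may still trust it (RELY),
-- and it does not prevent parallel DML on the table.
ALTER TABLE sales_fact
  ADD CONSTRAINT fk_sales_cust
  FOREIGN KEY (cust_id) REFERENCES dim_customer (cust_id)
  RELY DISABLE NOVALIDATE;
```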
11. Restrictions preventing PDML (2)
• Distributed transactions, DML on a remote DB
• The 12.2 documentation lists this as a restriction
• Indeed, this seems to work, but it doesn’t really make sense, because the DB link part is always serial:

SQL> insert /*+ enable_parallel_dml parallel */
     into t_sdoc
     select v.* from V_SDOC@remote_db V

2929218 rows created.

SQL> select * from v$pq_sesstat where statistic like 'DML%'

STATISTIC           LAST_QUERY SESSION_TOTAL     CON_ID
------------------- ---------- ------------- ----------
DML Parallelized             1             5          0

1 row selected.

-------------------------------------------------------
| Id | Operation                          | Name      |
-------------------------------------------------------
|  0 | INSERT STATEMENT                   |           |
|  1 |  PX COORDINATOR                    |           |
|  2 |   PX SEND QC (RANDOM)              | :TQ10001  |
|  3 |    LOAD AS SELECT (HYBRID TSM/HWMB)|           |
|  4 |     OPTIMIZER STATISTICS GATHERING |           |
|  5 |      PX RECEIVE                    |           |
|  6 |       PX SEND ROUND-ROBIN          | :TQ10000  |
|  7 |        REMOTE                      | V_SDOC    |
-------------------------------------------------------
12. Implications of PDML
• The PX coordinator and each PX server work in their own transactions
• The coordinator then uses a two-phase commit
• Hence, the user transaction is in a special mode
• The results of parallel modifications cannot be seen in the same transaction:

SQL> select count(*) from t_sdoc
Error at line 0
ORA-12838: cannot read/modify an object after modifying it in parallel

• Complex ETL processes relying on transaction integrity could be a problem: no PDML can be used for intermediate steps
• Note that serial direct path INSERT raises the same error, so you cannot use it as a reliable check that PDML was used
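A minimal sketch of the behavior described above (table names are illustrative): after a parallel or direct path insert, the transaction must be committed before the modified object can be read again in the same session.

```sql
-- Hypothetical tables; illustrates the ORA-12838 restriction.
INSERT /*+ enable_parallel_dml parallel append */
INTO t_sdoc SELECT * FROM t_src;

SELECT COUNT(*) FROM t_sdoc;   -- fails with ORA-12838 in the same transaction

COMMIT;

SELECT COUNT(*) FROM t_sdoc;   -- works after the commit
```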
14. Space Management with PDML
• Multiple concurrent transactions are modifying the same object
• What should be considered when doing a Parallel Direct Path Insert?
• Can this lead to excessive extent allocation or tablespace fragmentation?
• It is helpful to have an idea of what happens behind the scenes
• Fortunately, Oracle 12c makes more information visible in the execution plan:

--------------------------------------------------------------
| Id | Operation                          | Name             |
--------------------------------------------------------------
|  0 | INSERT STATEMENT                   |                  |
|  1 |  PX COORDINATOR                    |                  |
|  2 |   PX SEND QC (RANDOM)              | :TQ10000         |
|  3 |    LOAD AS SELECT (HYBRID TSM/HWMB)| T_COPY_PARALLEL  |
|  4 |     OPTIMIZER STATISTICS GATHERING |                  |
|  5 |      PX BLOCK ITERATOR             |                  |
|  6 |       TABLE ACCESS FULL            | T_SRC            |
--------------------------------------------------------------
15. Uniform vs. System-Allocated Extents: Uniform TBS
[Diagram: Table1 in a tablespace with uniform extents; all extents are equally sized, unused space is "inside" the extents]
• Tablespace with uniform extent size
• The unused space is inside the extent
• Internal fragmentation
• Full Table Scans will scan this free space too
• This free space can be used by conventional inserts
• But a PDML insert (direct path) starts to fill a new extent every time
16. Uniform vs. System-Allocated Extents: Autoallocate TBS
[Diagram: Table1 with different extent sizes (1M, 8M, 64M); extents can be trimmed, e.g. an 8M extent trimmed to 7M]
• Autoallocate extent sizes: 64K, 1M, 8M, 64M (with 8k block size)
• If free space is left after loading (> min extent), extent trimming happens and this free space is returned to the tablespace
• External fragmentation: free space is not contiguous and can potentially be reused if smaller extents are requested
17. High Water Mark Loading (HWM)
[Diagram: a single server process inserting into Table1 above the HWM]
• The server process has exclusive access to the segment (table or partition) and can insert into extents above the HWM
• After commit, the HWM is moved and new data becomes visible
• Serial load, or parallel load with PKEY distribution
18. Temp Segment Merge (TSM) Loading
[Diagram: two PX servers, each loading its own temp segment in the same tablespace as Table1]
• Each PX server is assigned to and populates its own temporary segment
• Last extents can be trimmed
• Temp segments reside in the same tablespace and are merged into the target table by manipulating the extent map on commit
• Very scalable, but at least one extent per PX server
• Fragmentation possible because of trimming
• In 12c rarely used when creating partitioned tables
20. High Water Mark Brokering (HWMB)
[Diagram: two PX servers inserting into Table1, coordinated via the HV enqueue]
• Multiple PX servers may insert into the same extent above the HWM, which then has to be "brokered"
• The brokering is implemented via the HV enqueue
• Results in fewer extents
• But less scalable
• Good for loading non-partitioned tables or single partitions
21. High Water Mark Brokering (HWMB) in RAC
[Diagram: PX servers on RAC instance 1 and RAC instance 2 competing for the same HV enqueue]
• Scalability can become an issue with a high DOP, especially in a RAC environment
22. Hybrid TSM/HWMB
[Diagram: the PX servers of each RAC instance load their own temp segment, each protected by its own HV enqueue]
• New in 12.1
• Each temporary segment has its own HV enqueue, which in a RAC environment is only used by the local PX servers
• Fewer extents
• Improved scalability
24. Data Loading Distribution
• Example:
  • Join two equipartitioned tables T_SRC2 and T_SRC3
  • Hash-partitioned, 64 partitions
  • 32 million rows

INSERT /*+ append parallel */
INTO t_tgt_join t0 (OWNER, OBJECT_TYPE, OBJECT_NAME, LVL, FILLER)
SELECT t1.OWNER, t2.OBJECT_TYPE, t2.OBJECT_NAME, t1.LVL, t1.filler
FROM t_src3 t1 JOIN t_src2 t2
  ON ( t1.OWNER = t2.OWNER AND t1.OBJECT_NAME = t2.OBJECT_NAME
   AND t1.OBJECT_TYPE = t2.OBJECT_TYPE AND t1.lvl = t2.lvl);
25. Data Loading Distribution
[Diagram: one PX set (P001, P002) reads T1 and T2 and redistributes; a second PX set (P003, P004) joins them; which set loads the result?]
• An example of joining two tables in parallel
• Which PX servers are actually loading the result table?
• The same ones that are doing the join?
• Or another PX set? Should the data then be redistributed again?
• This is where data loading distribution matters
26. Data Loading Distribution
• Since 11.2 the hint PQ_DISTRIBUTE can be used to control load distribution
• NONE – no distribution, load is performed by the same PX-Servers
• PARTITION – distribution based on partitioning of target table
• RANDOM – round-robin distribution, useful for highly skewed data
• RANDOM_LOCAL – round-robin for PX servers on the same RAC instance
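As an illustrative sketch of the hint (the table names are hypothetical; the hint argument names the alias of the target table of the load):

```sql
-- No redistribution: the load is performed by the same PX servers
-- that produce the join result.
INSERT /*+ append parallel pq_distribute(t0 none) */
INTO t_tgt_join t0
SELECT * FROM t_src;

-- Round-robin redistribution before the load, useful with skewed data.
INSERT /*+ append parallel pq_distribute(t0 random) */
INTO t_tgt_join t0
SELECT * FROM t_src;
```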
35. MERGE
• Basically, if PDML is enabled for the session and the statement, MERGE will parallelize both the INSERT and the UPDATE part
• But there are some differences:
  • No space management decoration is reported in the execution plan
  • Even worse, it always seems to run as Temp Segment Merge:
    • Significantly more extents are created
    • Many of them are trimmed
    • Every load operation starts again with many 64K extents
• It may be worth providing INITIAL and NEXT even for an Autoallocate tablespace
• Avoid MERGE if you don’t really need it (for example, if you materialize intermediate results anyway, like the ODI SCD Type 2 Knowledge Module does, you could update and insert in two separate parallel operations instead)
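A minimal sketch of the alternative mentioned above (all object and column names are hypothetical): instead of one MERGE, run a parallel UPDATE for the existing keys, then a parallel direct path INSERT for the new keys.

```sql
ALTER SESSION ENABLE PARALLEL DML;

-- Update rows that already exist in the target
UPDATE /*+ parallel */ t_tgt t
SET    t.val = (SELECT s.val FROM t_stage s WHERE s.id = t.id)
WHERE  EXISTS (SELECT 1 FROM t_stage s WHERE s.id = t.id);
COMMIT;

-- Direct path insert of the new rows only
INSERT /*+ parallel append */ INTO t_tgt
SELECT s.* FROM t_stage s
WHERE  NOT EXISTS (SELECT 1 FROM t_tgt t WHERE t.id = s.id);
COMMIT;
```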
36. Summary
• Don’t overuse PDML. Turn it on only selectively, where it makes sense
• Be careful and double-check that your statements are really doing PDML
• From 12.1.0.2, Oracle reports the space management strategy for LOAD AS SELECT operations in execution plans, but not for MERGE operations
• A bloated extent map will have a negative effect on parallel queries
• In 12c Oracle introduced Hybrid TSM/HWMB, which increases scalability while keeping the number of extents small
• Don’t create indexes on tables intended for partition exchange; they can significantly influence the execution plan. Bitmap indexes will even disable PDML!
• For the most critical loading processes, check the data distribution, which you can influence with the PQ_DISTRIBUTE hint
• If using MERGE for critical ETL, check the space management behavior
37. Links
• Oracle Documentation, VLDB Guide, About Parallel DML Operations
• Nigel Bayliss, Space Management with PDML
• Randolf Geist, Understanding Parallel Execution - Part 1 and Part 2
• Randolf Geist, Hash Join Buffered
• Timur Akhmadeev, PQ_DISTRIBUTE Enhancement
• Jonathan Lewis, Autoallocate and PX