The document summarizes 11 new features in Oracle Database 11g Release 2. It discusses improvements to parallelism, analytics, external tables, recursive queries, and flashback features. Key points include automated parallel DML, improved analytic functions like LISTAGG, using external tables with preprocessors on directories, recursive queries with common table expressions, and enhanced time travel capabilities.
PostgreSQL Procedural Languages: Tips, Tricks and Gotchas - Jim Mlodgenski
One of the most powerful features of PostgreSQL is its diversity of procedural languages, but with that diversity comes a lot of options.
Did you ever wonder:
- What all of those options are on the CREATE FUNCTION statement?
- How do they affect my application?
- Does my choice of procedural language affect the performance of my statements?
- Should I create a single trigger with IF statements or several simple triggers?
- How do I debug my code?
- Can I tell which line in my function is taking all of the time?
SCALE 15x: Minimizing PostgreSQL Major Version Upgrade Downtime - Jeff Frost
Let's face it, major version upgrades can be a pain. Most people are familiar with the dump/restore method, but if your database happens to be larger than a few GB, the downtime required for dump/restore is likely going to exceed your maximum maintenance window.
We'll briefly explore upgrade options including dump/restore, pg_upgrade and logical replication tools like Slony, and why I think Slony is currently the best option. Then we'll run through a tutorial on how to use Slony for a major version upgrade with minimal downtime.
You can find the scripts used in the demos here:
https://github.com/jfrost/scale-15x-talk
Top 10 Mistakes When Migrating From Oracle to PostgreSQL - Jim Mlodgenski
As more and more people move to PostgreSQL from Oracle, a pattern of mistakes is emerging. They can be caused by the tools being used, or simply by not understanding how PostgreSQL differs from Oracle. In this talk we will discuss the top mistakes people generally make when moving to PostgreSQL from Oracle and what the correct course of action is.
Unit tests are not limited to application code; tests can also be performed on database data and schemas.
Talk given at the PostgreSQL meetup on 22 June 2016 in Nantes.
In SQL, joins are a fundamental concept, and many database engines have serious problems when it comes to joining very many tables.
PostgreSQL is a pretty cool database - the question is just: How many joins can it take?
It is known that Oracle does not accept insanely long queries, and MySQL is known to core dump with 2,000 tables.
This talk shows how to join 1 million tables with PostgreSQL.
Finding and fixing bugs is a major chunk of any developer's time. This talk describes the basic rules for effective debugging in any language, and shows how the tools available in PHP can be used to find and fix even the most elusive errors.
Spencer Christensen
There are many aspects to managing an RDBMS. Some of these are handled by an experienced DBA, but there are a good many things that any sys admin should be able to take care of if they know what to look for.
This presentation will cover basics of managing Postgres, including creating database clusters, overview of configuration, and logging. We will also look at tools to help monitor Postgres and keep an eye on what is going on. Some of the tools we will review are:
* pgtop
* pg_top
* pgfouine
* check_postgres.pl
Check_postgres.pl is a great tool that can plug into your Nagios or Cacti monitoring systems, giving you even better visibility into your databases.
PostgreSQL is a strong relational database which is highly capable of replication. As of PostgreSQL 9.4, streaming replication is only able to replicate an entire database instance.
Walbouncer allows you to filter the PostgreSQL WAL and partially replicate single databases.
PGDay.Amsterdam 2018 - Stefan Fercot - Save your data with pgBackRest
Ever heard of point-in-time recovery? pgBackRest is an awesome tool to handle backups and restores, and it even helps you build streaming replication! This talk will introduce the tool, its basic features, and how to use it.
Some companies have to process data volumes that by far exceed the capacity of “small” database clusters and they definitely have a valid use case for one of the modern parallelizing / streaming / big data processing technologies. For all others, expressing transformations in plain SQL is just fine and PostgreSQL is the perfect workhorse for that purpose.
In this talk, I will go through some of our best practices for building fast, robust, and tested data integration pipelines inside PostgreSQL. I will explain many of our technical patterns, for example for schema management or for splitting large computations by chunking and table partitioning. And I will show how to apply standard software engineering techniques to maintain agility, consistency, and correctness.
Video: http://www.ustream.tv/recorded/109227465
This talk starts with the system call getrusage, which returns synchronous, process-level information, mainly the maximum RSS used. It describes the output from getrusage, the rusage formatting utility in ProcStats, and several examples of using it to examine time and memory use.
Optional first and final outputs give a baseline and total status, differencing avoids extraneous output, and user messages allow arbitrary stats and tracking content.
The combination makes this nice for tracking both long-lived and shorter, more intensive processing.
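As a quick illustration of the kind of data getrusage returns, Python exposes the same system call through its standard `resource` module (Unix-only; note that `ru_maxrss` is in kilobytes on Linux but bytes on macOS). This is only a sketch of the snapshot-and-difference idea the talk describes, not the ProcStats utility itself:

```python
import resource

def snapshot():
    """Return (user CPU s, system CPU s, max RSS) for this process,
    straight from the getrusage(2) system call."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_utime, ru.ru_stime, ru.ru_maxrss

before = snapshot()
_ = [bytes(1024) for _ in range(10_000)]  # do some memory-using work
after = snapshot()

# Differencing two snapshots, as the talk suggests, avoids reporting
# baseline usage that predates the code under measurement.
print(f"max RSS grew by {after[2] - before[2]} units")
```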
Talk given at PHP Conference 2018 on advanced PostgreSQL SQL features that dramatically increase performance and simplify application development.
pg_proctab: Accessing System Stats in PostgreSQL - Mark Wong
pg_proctab is a collection of PostgreSQL stored functions that provide access to the operating system process table using SQL. We'll show you which functions are available and where they collect the data, and give examples of their use to collect processor and I/O statistics on SQL queries.
Introduces important facts and tools to help you get started with performance improvement.
Learn to monitor and analyze important metrics, then you can start digging and improving.
Includes useful munin probes, predefined SQL queries to investigate your database's performance, and a top 5 of the most common performance problems in custom Apps.
By Olivier Dony - Lead Developer & Community Manager, OpenERP
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Flash Cache: how it works (Extended Buffer Cache). The tiers are 16 GB of SGA memory (hot data), a 120 GB Flash Cache (warm data), and 360 GB of magnetic disks (cold data).
1. Blocks are read into the buffer cache.
2. Dirty blocks are flushed to disk.
3. Clean blocks are moved to the Flash Cache based on LRU once the SGA is full (headers for Flash Cached blocks are kept in the SGA).
4. User processes read blocks from the SGA; blocks are copied back from the Flash Cache if they are not in the SGA.
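The two-tier eviction idea behind the Flash Cache can be sketched in a few lines. This is a toy model under stated assumptions, not Oracle's implementation: the class name, capacities, and block representation are all illustrative.

```python
from collections import OrderedDict

class ExtendedBufferCache:
    """Toy two-tier LRU: a small memory tier (the "SGA") backed by a
    larger flash tier. Clean blocks evicted from memory are demoted to
    flash; dirty blocks are "flushed to disk" instead."""

    def __init__(self, sga_capacity, flash_capacity):
        self.sga = OrderedDict()    # block_id -> (data, dirty flag)
        self.flash = OrderedDict()  # block_id -> data (clean blocks only)
        self.sga_capacity = sga_capacity
        self.flash_capacity = flash_capacity
        self.disk_reads = 0
        self.disk_writes = 0

    def read(self, block_id):
        if block_id in self.sga:            # hit in memory
            self.sga.move_to_end(block_id)
            return self.sga[block_id][0]
        if block_id in self.flash:          # hit in flash: copy back to SGA
            data = self.flash.pop(block_id)
        else:                               # miss: read from disk
            self.disk_reads += 1
            data = f"block-{block_id}"
        self._install(block_id, data, dirty=False)
        return data

    def write(self, block_id, data):
        self._install(block_id, data, dirty=True)

    def _install(self, block_id, data, dirty):
        self.sga[block_id] = (data, dirty)
        self.sga.move_to_end(block_id)
        while len(self.sga) > self.sga_capacity:   # evict LRU block
            victim, (vdata, vdirty) = self.sga.popitem(last=False)
            if vdirty:
                self.disk_writes += 1              # dirty: flush to disk
            else:
                self.flash[victim] = vdata         # clean: demote to flash
                if len(self.flash) > self.flash_capacity:
                    self.flash.popitem(last=False)
```

With an SGA of two blocks, reading blocks 1, 2, 3 pushes the clean block 1 into the flash tier, so a later read of block 1 avoids a disk read.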
Automated Degree of Parallelism: how it works. A SQL statement is hard parsed and the optimizer determines the execution plan. If the estimated time is less than the threshold (PARALLEL_MIN_TIME_THRESHOLD), the statement executes serially. If it is greater, the optimizer determines the ideal DOP and the statement executes in parallel with actual DOP = MIN(default DOP, ideal DOP).
Parallel Statement Queuing: how it works. A SQL statement is parsed and Oracle automatically determines its DOP. If enough parallel servers are available, the statement executes immediately; if not, it is placed in a FIFO queue. When the required number of parallel servers becomes available, the first statement in the queue is dequeued and executed.
In-Memory Parallel Execution: how it works. Oracle determines the size of the table being scanned. An extremely small table is read into the buffer cache on any node. An extremely large table always uses direct reads from disk. A table that is a good candidate for In-Memory Parallel Execution has its fragments read into each node's buffer cache, and only parallel servers on the same RAC node access each fragment.
Edition-based Redefinition! Yes, this is here twice, but only because it is the killer feature of Oracle Database 11g Release 2; it is worth two features.
When a SQL statement is executed, it is hard parsed and a serial plan is developed. The expected elapsed time of that plan is then examined. If the expected elapsed time is less than PARALLEL_MIN_TIME_THRESHOLD, the query executes serially. If it is greater than PARALLEL_MIN_TIME_THRESHOLD, the plan is re-evaluated to run in parallel and the optimizer determines the ideal DOP based on the resources required for all scan operations (full table scan, index fast full scan, and so on). However, the optimizer caps the actual DOP for a statement at the default DOP (PARALLEL_THREADS_PER_CPU x CPU_COUNT x INSTANCE_COUNT), to ensure parallel processes do not flood the system.
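The DOP decision described above reduces to a small amount of arithmetic. The sketch below is illustrative only; the parameter names mirror the Oracle initialization parameters, but the function is not Oracle code and the default values are made up for the example:

```python
def choose_dop(estimated_time_s, ideal_dop,
               parallel_min_time_threshold_s=10,
               parallel_threads_per_cpu=2, cpu_count=8, instance_count=1):
    """Return the degree of parallelism a statement would receive."""
    if estimated_time_s < parallel_min_time_threshold_s:
        return 1  # below the threshold: run serially
    # The default DOP caps the optimizer's ideal DOP so a single
    # statement cannot flood the system with parallel servers.
    default_dop = parallel_threads_per_cpu * cpu_count * instance_count
    return min(default_dop, ideal_dop)
```

With these example settings the default DOP is 2 x 8 x 1 = 16, so a fast query (estimated 2 s) runs serially, while a slow query with an ideal DOP of 64 is capped at 16.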
Step 1: A SQL statement is issued, and Oracle automatically determines whether it will run in parallel and, if so, what DOP it will get.
Step 2: Oracle checks whether there are enough PQ servers to execute the query. If there are, it executes immediately; if there are not, the query is queued in a FIFO.
Step 3: Everyone waits in the queue for the statement at the head of the queue to get all of its requested PQ servers, even if there are enough PQ servers available for one of the other statements to run.
Step 4: When enough PQ servers are available, the statement is dequeued and allowed to execute.
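The head-of-line blocking in step 3 is easiest to see in code. This is a minimal sketch of a strict FIFO parallel-statement queue, illustrative only (the class and method names are invented, not an Oracle API):

```python
from collections import deque

class ParallelStatementQueue:
    """Toy model of FIFO parallel statement queuing: each statement
    requests a number of PQ servers, and only the head of the queue
    may start, so a large request blocks smaller ones behind it."""

    def __init__(self, total_pq_servers):
        self.free = total_pq_servers
        self.queue = deque()   # (statement name, requested DOP)
        self.running = []

    def submit(self, name, dop):
        self.queue.append((name, dop))
        self._dispatch()

    def finish(self, name, dop):
        self.running.remove(name)
        self.free += dop       # statement releases its PQ servers
        self._dispatch()

    def _dispatch(self):
        # Strict FIFO: stop at the first statement that does not fit,
        # even if later statements in the queue would fit.
        while self.queue and self.queue[0][1] <= self.free:
            name, dop = self.queue.popleft()
            self.free -= dop
            self.running.append(name)
```

With 64 PQ servers, a running statement using 32 leaves 32 free; a queued statement requesting 64 then blocks a later one requesting only 8, exactly the situation step 3 describes.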