The Challenges of Distributing Postgres: A Citus Story – Hanna Kelman
Set theory forms the basis for relational algebra and relational databases, and SQL is the lingua franca of modern RDBMSs. Even with all the attention given to NoSQL in recent years, the lion's share of database usage remains relational. But until recently, nearly all relational database solutions have been limited to the resources of a single node. Not anymore.
This talk is about my team’s journey tackling the challenges of distributing SQL. Specifically in the context of my favorite (open source) database: Postgres. I believe that too many developers spend too much time worrying about scaling their databases. So at Citus Data, we created an extension to Postgres that enables developers to scale out compute, memory, and storage by distributing queries across a cluster of nodes.
This talk describes the distributed systems challenges we faced at Citus in scaling out Postgres—and how we addressed them. I’ll talk about how we use PostgreSQL’s extension APIs to parallelize queries in a distributed cluster. I’ll cover the architecture of a distributed query planner and specifically how the join order planner has to choose between broadcast, co-located, and repartition joins in order to minimize network I/O. And if there’s time, I’ll walk through the dynamic executor logic that we built. The end result: a distributed database and a lot less time spent worrying about scale.
Performance improvements in PostgreSQL 9.5 and beyond – Tomas Vondra
Let's see what major performance improvements PostgreSQL 9.5 brings, measure the impact on simple examples and also briefly look at improvements likely to appear in PostgreSQL 9.6 or some of the following releases.
MySQL exposes a collection of tunable parameters and indicators that is frankly intimidating. But a poorly tuned MySQL server is a bottleneck for your PHP application scalability. This session shows how to do InnoDB tuning and read the InnoDB status report in MySQL 5.5.
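For orientation, these are the kinds of statements such a tuning session starts from (the variable names are standard InnoDB settings; note that in MySQL 5.5 the buffer pool size can only be set in my.cnf and requires a server restart):

```sql
-- Inspect the InnoDB settings most often tuned:
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- The InnoDB status report the session teaches you to read:
SHOW ENGINE INNODB STATUS\G
```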
Test: DML with NOLOGGING
NOLOGGING: Oracle will generate a minimal number of redo log entries in order to protect
the data dictionary, and the operation will probably run faster. Logging can be disabled at
the table level or the tablespace level.
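A minimal sketch of such a test (the table and staging-source names are hypothetical). NOLOGGING only reduces redo for direct-path operations, so the DML uses the APPEND hint:

```sql
-- Hypothetical table; NOLOGGING reduces redo only for direct-path loads.
ALTER TABLE big_load NOLOGGING;

INSERT /*+ APPEND */ INTO big_load
SELECT * FROM staging_data;
COMMIT;

-- Restore full logging (and take a fresh backup) once the load is done,
-- since NOLOGGING changes cannot be recovered from the redo stream.
ALTER TABLE big_load LOGGING;
```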
See conference video - http://www.lucidimagination.com/devzone/events/conferences/ApacheLuceneEurocon2011
In this talk, Lucene/Solr committer Mark Miller will discuss some of the new features and advancements that users can look forward to in Solr 4. The list of topics will include: performance optimizations, further support for near-realtime search, SolrCloud, DirectSolrSpellChecker, and more.
ConFoo MySQL Replication Evolution: From Simple to Group Replication – Dave Stokes
MySQL Replication has been around for many years, but how well do you understand it? Do you know about read/write splitting, RBR vs SBR style replication, and InnoDB Cluster?
Hello everyone! I hope you are all doing well at work and in your busy lives.
Today I am listing some interesting ORA- errors that I ran into recently as a beginner. Luckily, I managed to solve them too, so here are the errors along with their solutions.
These come up when you work with Oracle; you may have faced them already, or you might face them soon.
So be fearless and have a look. If you need any help, please let me know.
Thank you.
KSQL – An Open Source Streaming Engine for Apache Kafka – Kai Wähner
The rapidly expanding world of stream processing can be daunting, with new concepts such as various types of time semantics, windowed aggregates, changelogs, and programming frameworks to master. KSQL is an open-source, Apache 2.0 licensed streaming SQL engine on top of Apache Kafka which aims to simplify all this and make stream processing available to everyone. The project is managed and open sourced by Confluent.
KSQL makes it easy to read, write, and process streaming data in real-time, at scale, using SQL-like semantics. It offers an easy way to express stream processing logic as an alternative to writing an application in a programming language such as Java, Python or Go. Benefits of using KSQL include: No coding required; no additional analytics cluster needed; streams and tables as first-class constructs; access to the rich Kafka ecosystem.
This session introduces the concepts and architecture of KSQL. Use cases such as Streaming ETL, Real Time Stream Monitoring or Anomaly Detection are discussed. A live demo shows how to setup and use KSQL quickly and easily on top of your Kafka ecosystem.
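To give a flavour of the SQL-like semantics, here is a hedged sketch following early KSQL syntax (the topic name and schema are hypothetical):

```sql
-- Register an existing Kafka topic as a stream:
CREATE STREAM pageviews (viewtime BIGINT, userid VARCHAR, pageid VARCHAR)
  WITH (KAFKA_TOPIC = 'pageviews', VALUE_FORMAT = 'JSON');

-- Continuous query: views per user over a one-minute tumbling window.
SELECT userid, COUNT(*) AS views
FROM pageviews
WINDOW TUMBLING (SIZE 1 MINUTE)
GROUP BY userid;
```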
On Monday evening, 15 July, AMIS organized the seminar 'Oracle Database 12c revealed'. The evening gave AMIS Oracle professionals their first opportunity to see the new features of Oracle Database 12c in action! The AMIS specialists, who had carried out more than a year of beta testing, showed what is new and how we will put it to use in the coming years.
This presentation was given that evening as a parallel session.
SQL Track: Restoring databases with PowerShell – ITProceed
Build plan of approach to structured point in time restores of databases ( e.g. from Production to QA ) using Powershell as an easy helper tool to ensure all steps are being performed.
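The PowerShell tooling ultimately drives ordinary T-SQL restore steps; a minimal sketch of the point-in-time sequence it would orchestrate (the database name, file paths, and timestamp are hypothetical):

```sql
-- Restore the full backup without recovery, relocating files for QA:
RESTORE DATABASE SalesQA
  FROM DISK = N'\\backupshare\Sales_full.bak'
  WITH NORECOVERY, REPLACE,
       MOVE N'Sales_data' TO N'D:\QA\SalesQA.mdf',
       MOVE N'Sales_log'  TO N'D:\QA\SalesQA_log.ldf';

-- Roll the log forward to the requested point in time, then recover:
RESTORE LOG SalesQA
  FROM DISK = N'\\backupshare\Sales_log.trn'
  WITH STOPAT = N'2024-01-15T09:30:00', RECOVERY;
```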
PHP classes in Mumbai: Introduction to PHP/MySQL.
The best PHP/MySQL classes in Mumbai, with job assistance.
Our features are:
Expert guidance by IT industry professionals
Lowest fees of 5000
Practical exposure to handling projects
Well-equipped lab
Resume-writing guidance after the course
For more, visit: http://vibranttechnologies.co.in/php-classes-in-mumbai.html or http://phptraining.vibranttechnologies.co.in
Streams Don't Fail Me Now – Robustness Features in Kafka Streams – HostedbyConfluent
Stream processing applications can experience downtime due to a variety of reasons, such as a Kafka broker or another part of the infrastructure breaking down, an unexpected record (known as a poison pill) that causes the processing logic to get stuck, or a poorly performed upgrade of the application that yields unintended consequences.
Apache Kafka's native stream processing solution, Kafka Streams, has been successfully used with little or no downtime in many companies. This has been made possible by several robustness features built into Streams over the years and best practices that have evolved from many years of experience with production-level workloads.
In this talk, I will present the unique solutions the community has found for making Streams robust, explain how to apply them to your workloads and discuss the remaining challenges. Specifically, I will talk about standby tasks and rack-aware assignments that can help with losing a single node or a whole data center. I will also demonstrate how custom exception handlers and dead letter queues can make a pipeline more resistant to bad data. Finally, I will discuss options to evolve stream topologies safely.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps looks like. We also ran a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Key Trends Shaping the Future of Infrastructure – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction – tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Generating a custom Ruby SDK for your web service or Rails API using Smithy – g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Database & Technology 1 _ Clancy Bufton _ Flashback Query - oracle total recall and some of the uses.pdf
1. Flashback Query – Total Recall and some of its uses
By Clancy Bufton, Park Lane IT

2. Introduction
• Clancy Bufton – Oracle DBA at Park Lane IT
• 7 years experience as an Oracle Database Administrator working for clients in government and utilities
• Clancy.Bufton@parklane.com.au

3. Flashback Query
• Available since version 9.2
– Implemented on Oracle's UNDO-based read consistency
• The select statement has two new clauses
• AS OF
– Returns the entire table as it existed at a point in time
• VERSIONS BETWEEN
– Returns all committed versions of rows that existed between two points in time

4. Flashback Query Examples
• select * from scott.emp as of timestamp sysdate - interval '60' minute;
• select * from scott.emp versions between timestamp sysdate - interval '60' minute and sysdate - interval '5' minute;
5. Flashback Query
• Versions pseudo columns, used with versions between
– VERSIONS_STARTSCN and VERSIONS_STARTTIME
• Starting System Change Number (SCN) or TIMESTAMP when the row version was created. This pseudocolumn identifies the time when the data first had the values reflected in the row version. Use this pseudocolumn to identify the past target time for Oracle Flashback Table or Oracle Flashback Query. If this pseudocolumn is NULL, then the row version was created before start.
– VERSIONS_ENDSCN and VERSIONS_ENDTIME
• SCN or TIMESTAMP when the row version expired. If the pseudocolumn is NULL, then either the row version was current at the time of the query or the row corresponds to a DELETE operation.
– VERSIONS_XID
• For each version of each row, returns the transaction ID (a RAW number) of the transaction that created the row version.
– VERSIONS_OPERATION
• Operation performed by the transaction: I for insertion, D for deletion, or U for update. The version is that of the row that was inserted, deleted, or updated; that is, the row after an INSERT operation, the row before a DELETE operation, or the row affected by an UPDATE operation.

6. Versions Pseudo Column Query Example
• Can be used in the select list, the where clause and the order by clause
• select versions_startscn, versions_endscn, versions_operation, emp.* from scott.emp versions between timestamp sysdate - interval '20' hour and sysdate - interval '5' minute emp where versions_operation = 'U' order by versions_startscn;
7. Total Recall
• Total Recall is new in version 11
• Introduces Flashback Archives to the database
• Built on Partitioning and Advanced Compression technology
• Introduces a new background process, FBDA

8. Flashback Archive
• New privilege FLASHBACK ARCHIVE ADMINISTER
• New DDL statement
– create flashback archive flba1 tablespace flash_archives retention 10 day;
• Creates archive tables in a normal tablespace automatically
• Enabled per table
– alter table scott.emp flashback archive flba1;
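Once a table is tracked by a flashback archive, ordinary Flashback Query syntax can reach back past the undo retention window. A minimal sketch (the three-day offset is illustrative and assumes the archive's retention covers it):

```sql
-- History older than the undo window is served from the flashback archive.
select *
from scott.emp
as of timestamp (systimestamp - interval '3' day);
```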
10. Flashback Archive – Internal Tables
• Enabling flashback archive on a table automatically creates three new tables in the schema
• The table name is formed by SYS_FBA_<purpose>_<FBDA_object_identifier>
• SYS_FBA_DDL_COLMAP_<object_id>
– Records current and past columns that existed on the base table (supports DDL on the base table)
• SYS_FBA_HIST_<object_id>
– Contains the actual historical values
• SYS_FBA_TCRV_<object_id>
– Maps start and end SCN to rowids in the base table to identify the current version of the row

14. Historical Indexes
• Indexes on the base table aren't mirrored on history tables
• Indexes can be created on history tables
• E.g. a local prefixed index created for the start and end SCN:
CREATE INDEX "SCOTT"."EMP_SCN_INDEX" ON "SCOTT"."SYS_FBA_HIST_73257" ("ENDSCN", "STARTSCN") local;
15. Partitioning and Compression
• PARTITION BY RANGE clause on the SCN column of SYS_FBA_HIST tables
– Oracle Partitioning option
• COMPRESS FOR OLTP clause on SYS_FBA_HIST tables
– Oracle Advanced Compression option

16. Flashback Data Archiver (FBDA)
• Is a new background process
• Maintains the flashback archives
• FBDA archives the historical rows of tracked tables into flashback data archives
• FBDA is also responsible for automatically managing the flashback data archive for space, organization, and retention, and keeps track of how far the archiving of tracked transactions has occurred

17. Flashback Data Archiver (FBDA)
• Dynamic based on DML workload
• Automatically spawns parallel slaves
• Runs asynchronously by default every 5 minutes
• Runs more frequently depending on workload
• Reads UNDO buffers from cache, or from disk if they have aged out
18. DDL Support
• All DDL is supported in 11.2
• For complex schema changes, dbms_flashback_archive provides disassociate_sa and reassociate_sa
• Manual changes can be made after disassociation to the base table and history table
• Reassociation can only occur if the base table and history table schemas are the same

19. DBMS_FLASHBACK_ARCHIVE
• begin dbms_flashback_archive.disassociate_sa('SCOTT','EMP'); end; /
• begin dbms_flashback_archive.reassociate_sa('SCOTT','EMP'); end; /

20. DML Support
• FBDA supports parallel DML
• DML cannot be performed on a history table by a user
• Except when it is disassociated from the base table
21. Uses for Total Recall
• Auditing
– Provides a tamper-proof historical record of all changes
• Row-based recovery
– Can be used to recover individual rows by updating back to a previous value
• Change data capture
– Fine-grained change capture for data warehouse extracts

22. Caution
• Setting a very long UNDO_RETENTION in place of using flashback archives
• Asynchronous FBDA – changes may not be visible to a flashback archive query for several minutes after commit
• Global indexes on history tables
– When FBDA automatically maintains the partitions, the entire index will be invalidated and need to be rebuilt
• Don't use in 11.1
– Versions between only semi-functional
– No DDL support
– Many bugs
– FBDA: no parallel DML