This document discusses PostgreSQL performance on multi-core systems with multi-terabyte data. It covers current market trends towards more cores and larger data sizes. Benchmark results show that PostgreSQL scales well on inserts up to a certain number of clients/cores but struggles with OLTP and TPC-E workloads due to lock contention. Issues are identified with sequential scans, index scans, and maintenance tasks like VACUUM as data sizes increase. The document proposes making PostgreSQL utilities and tools able to leverage multiple cores/processes to improve performance on modern hardware.
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
1. Problems with PostgreSQL on
Multi-core Systems with Multi-
Terabyte Data
Jignesh Shah and
PostgreSQL Performance Team @ Sun
Sun Microsystems Inc
PGCon May 2008 - Ottawa
2. PostgreSQL Performance Team
@Sun
• Staale Smedseng
• Magne Mahre
• Paul van den Bogaard
• Lars-Erik Bjork
• Robert Lor
• Jignesh Shah
• Guided by Josh Berkus
3. Agenda
• Current market trends
• Impact on workload with PostgreSQL on multi-core systems
> PGBench Scalability
> TPC-E Like Scalability
> IGEN Scalability
• Impact with multi-terabyte data running PostgreSQL
• Tools and Utilities for the DBA
• Summary / Next Steps
5. Current Market Trends in Systems
• Quad-core sockets are the current market standard
> 8-core sockets are also available now and could become the standard in the next couple of years
• Most common rack servers now have two sockets
> 8-core (or more) systems are the norm, with the trend going to 12-16 core systems soon
• Most servers have internal drives > 146 GB
> Denser in capacity, smaller in size, but essentially the same or lower speed
> Even denser in the case of SATA-II drives
6. Current Market Trends in Software
• Software (including operating systems) has yet to fully catch up with multi-core systems
> “tar” is still a single-process utility
• Horizontal scaling helps a lot but is not a clean solution for multi-core systems
• Virtualization is the new buzzword for consolidation
> It hides the fact that the software is not able to fully capitalize on the extra cores :-(
• Research is being done on new paradigms
> The complexity of parallelized software is huge
7. Current Market Trends in Data
• 12 years ago, a 20GB data warehouse was
considered a big database
• Now everybody talks about 200GB-5TB databases
• Some 2005 Survey numbers:
> Top OLTP DB sizes = 5,973 GB to 23,101 GB
> Top DW DB Sizes = 17,685 GB to 100,386 GB
> Source http://www.wintercorp.com/VLDB/2005_TopTen_Survey/TopTenWinners_2005.asp
• Some 2007 Survey numbers:
> Top DB sizes = 20+ TB to 220 TB ( 6+ PB on tape)
> Source http://www.businessintelligencelowdown.com/2007/02/top_10_largest.html
10. PGBench (inserts only)
[Chart: transactions per second (tps) vs. number of clients (1-16), comparing Linear, tps (async), tps (commit_sibling=5), and tps (default); y-axis 0-12,000 tps]
• IOPS on the logs during a regular pgbench run is around 800 w/s, which means transaction optimizations are happening somewhere
• With commit_delay (commit_siblings=5) at 16 clients, IOPS on the logs is 102 w/s, which means quite a bit of log capacity remains
• With synchronous_commit=off and wal_writer_delay=100ms, the IOPS on the log devices is 10 w/s
• Same performance with wal_writer_delay=10ms (50 w/s on logs) and wal_writer_delay=1ms (100 w/s on logs)
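The IOPS figures above imply group commit: many transactions are amortized over each log flush. A rough back-of-envelope sketch of that ratio, where the ~10,000 tps figure is an assumption read off the chart, not an exact measurement:

```python
# Back-of-envelope: how many commits share each WAL flush (group commit).
# The tps value is an assumption taken from the pgbench chart above.
def commits_per_flush(tps, log_writes_per_sec):
    """Average number of transactions amortized over one log write."""
    return tps / log_writes_per_sec

# Default config: ~800 w/s on the log device
default_ratio = commits_per_flush(10_000, 800)
# commit_delay with commit_siblings=5: ~102 w/s
delayed_ratio = commits_per_flush(10_000, 102)

print(f"default: {default_ratio:.1f} commits per log write")       # ~12.5
print(f"commit_delay: {delayed_ratio:.1f} commits per log write")  # ~98.0
```

The higher the ratio, the less log-device bandwidth each transaction consumes, which is why the log devices have spare capacity with commit_delay enabled.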
11. PGBench (inserts only) – Take II
[Chart: transactions per second (tps) vs. number of clients (1-128, doubling), comparing Linear, tps (async), tps (commit_sibling=5), and tps (default); y-axis 400-400,000 tps, log scale]
• At 128 clients the system (16 cores) was 58% idle (commit), 56% idle (async)
• As more and more clients are added, performance eventually seems to converge
• At 256 clients and beyond, pgbench itself (running on a different server) becomes CPU-core limited, so those results are not useful
12. TPC-E Like Workload with PG 8.3
[Chart: performance vs. number of x64 cores (1-9), showing idle CPU, TPC-E throughput, linear scaling, and client counts]
• At 7 cores the number of clients increases beyond 110
• Adding cores beyond 8 doesn't seem to help much in terms of performance
• A quick DTrace run shows ProcArrayLock exclusive waits increasing while committing transactions at high loads
13. Some Thoughts
• Two main blockers for the PostgreSQL backend:
> READS
> LOCK WAITS (top few):
> ProcArrayLock (Exclusive)
> Dynamic locks (Shared): IndexScan
> Dynamic locks (Exclusive): InsertIndexTuples
• Reducing and modifying various indexes increased performance more than 2-3X, but gains are still limited because of the core indexes required
• We haven't filtered out lock spins, which keep the CPU busy (resulting in higher CPU utilization with more cores without a corresponding increase in throughput)
14. IGEN with PostgreSQL on T2000
[Chart: iGen OLTP TPM (4-minute averages) vs. number of clients (32-1536), with think times of 1, 5, 10, 50, 100, and 200 ms; 32 HW threads; y-axis 0-160,000 TPM]
• The Sun Enterprise T2000 has 32 hardware threads of CPU
• Data and log files on RAM (/tmp)
• Database reloaded every time before the run
15. IGEN with PostgreSQL on T2000
• First character is the LockID (only 2 different locks pop up: ProcArrayLock == 4; WALWriteLock == 8)
• Second is the mode: S(hared) or E(xclusive)
• Third is the function: P(arse), B(ind), E(xecute)
• Example: ProcArrayLock held in S(hared) mode while doing a B(ind) operation is reflected in the graph as 4SB
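The labeling scheme above can be made concrete with a small decoder. This is a sketch of the slide's own convention, not anything from PostgreSQL itself:

```python
# Decode the slide's compact lock-wait labels: LockID + Mode + Phase.
# Mappings come straight from the bullet points above.
LOCK_IDS = {4: "ProcArrayLock", 8: "WALWriteLock"}
MODES = {"S": "Shared", "E": "Exclusive"}
PHASES = {"P": "Parse", "B": "Bind", "E": "Execute"}

def decode_label(label):
    """Expand a label like '4SB' into a readable description."""
    lock_id, mode, phase = int(label[:-2]), label[-2], label[-1]
    return f"{LOCK_IDS[lock_id]} ({MODES[mode]}) during {PHASES[phase]}"

print(decode_label("4SB"))  # ProcArrayLock (Shared) during Bind
print(decode_label("8EE"))  # WALWriteLock (Exclusive) during Execute
```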
16. OLTP Workload on PostgreSQL
• Top lightweight locks with increasing wait times as connections increase:
> ProcArrayLock
> EXCLUSIVE: called from ProcArrayEndTransaction() from CommitTransaction()
> SHARED: called from GetSnapshotData()
> WALWriteLock
> XLogFlush()
> WALInsertLock
> XLogInsert()
17. ProcArray LWLock Thoughts
• The ProcArray lock currently has one wait list
> If it encounters a SHARED waiter, it checks whether the following wait-listed process is also SHARED; if so, it wakes them up together
> If it encounters an EXCLUSIVE waiter, it just wakes up that process
• Multiple SHARED processes can execute simultaneously on multi-core systems
> Maybe two wait lists (SHARED, EXCLUSIVE), or something that schedules SHARED requests together, might improve utilization
• This won't help EXCLUSIVE (which is the main problem)
> Reduce the code path
> Use some sort of messaging to synchronize
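The two-wait-list idea above can be sketched as a queueing policy: keep SHARED and EXCLUSIVE waiters in separate queues and wake the entire SHARED queue in one batch. This is a toy illustration of the proposal, not PostgreSQL's actual LWLock code:

```python
from collections import deque

class TwoListLock:
    """Sketch of the proposed two-wait-list policy: all queued SHARED
    waiters wake together; EXCLUSIVE waiters wake one at a time."""

    def __init__(self):
        self.shared_q = deque()
        self.exclusive_q = deque()

    def enqueue(self, waiter, mode):
        (self.shared_q if mode == "S" else self.exclusive_q).append(waiter)

    def release(self):
        """Return the batch of waiters to wake when the lock is released."""
        if self.shared_q:
            batch = list(self.shared_q)  # wake every SHARED waiter at once
            self.shared_q.clear()
            return batch
        if self.exclusive_q:
            return [self.exclusive_q.popleft()]  # one EXCLUSIVE at a time
        return []
```

With a single mixed wait list, a SHARED waiter queued behind an EXCLUSIVE one sits idle even though it could run concurrently with the SHARED holders; batching the SHARED queue avoids that on multi-core systems.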
18. WALWriteLock & WALInsertLock
• WALWriteLock contention can be controlled
> commit_delay=10 (at the expense of individual commit latency)
> synchronous_commit=off (for non-mission-critical workloads)
• WALInsertLock (writing into the WAL buffers) eventually still is a problem
> Even after increasing WAL buffers, its single-write-lock architecture makes it a contention point
• Making WALInsertLock more granular would certainly help scalability
> There has been some discussion on reserving WAL buffer space and releasing locks earlier
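The reserve-then-release idea amounts to: hold the insert lock only long enough to claim an offset range in the buffer, then copy the record bytes outside the lock so concurrent inserters can copy in parallel. A minimal sketch, with made-up names, assuming an in-memory buffer large enough for all records:

```python
import threading

class WalBuffer:
    """Sketch: hold the insert lock only to reserve space, then copy the
    record into the reserved range without holding the lock."""

    def __init__(self, size):
        self.buf = bytearray(size)
        self.insert_lock = threading.Lock()
        self.reserved = 0  # next free offset in the buffer

    def insert(self, record: bytes) -> int:
        with self.insert_lock:            # short critical section
            start = self.reserved
            self.reserved += len(record)  # claim the byte range
        # The copy runs outside the lock, concurrently with other
        # inserters copying into their own reserved ranges.
        self.buf[start:start + len(record)] = record
        return start
```

The critical section shrinks from "copy the whole record" to "bump a counter", which is what makes the lock less of a contention point as cores are added.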
20. Some Observations
• It's easy to reach a terabyte even in OLTP environments
• Even a single-socket TPC-E run could result in close to 1 TB of data population
> http://tpc.org/tpce/tpce_price_perf_results.asp
• In some sense you can work around “writes”, but a “read” will block, and random reads can have really poor response times as the database gets bigger
• Blocking reads, especially while holding locks, are detrimental to performance and scaling
21. Impact on Sequential Scan
• Sequential scan rate depends not only on the storage hardware but also on CPU-intensive functions, whose cost depends on the updates done to the table since the last vacuum
> Functions with high CPU usage during sequential reads: HeapTupleSatisfiesMVCC (needs VACUUM to avoid this CPU cost), heapgettup_pagemode, advance_* (count() function)
> Blocking reads followed by CPU-intensive functions make inefficient use of system resources; these should be separated into two processes if possible
> It is hard to predict the scan rate of a sequential scan with PostgreSQL
> Example: before VACUUM, a sequential scan takes 216.8 sec
> After VACUUM, the same sequential scan takes 120.2 sec
22. Impact on Index Range Scan
• Similar to sequential scan, except even slower
• High-CPU functions include index_getnext(), _bt_checkkeys, HeapTupleSatisfiesMVCC, pg_atomic_cas (apart from the blocking that happens with reads)
• Slow enough to cause performance problems
> 26 reads/sec on the index and 1,409 reads/sec on the table during a sample index range scan (with the file system buffer on). It is really the reads on the table that kill the range scan, even when the SQL only refers to columns in the index
> 205 seconds vs. 102 seconds (via sequential scan) while doing a primary-key range scan
24. Think About the DBA
• Multi-core systems mean more end users using the database
• More pressure on the DBA to keep the scheduled downtime window small
• Keeping DBAs guessing (“is it done yet?”) while running maintenance commands is like testing the breaking point of their patience
• Example: VACUUM FULL
> A customer (DBA) reported it took 18 hours to vacuum 3.4TB
> VACUUM is just an example; all maintenance commands need to be designed multi-core aware to handle multi-terabyte data efficiently
25. Tools & Utilities
• Tools are generally run as a single task at a time
• Problems with tools using a single-process approach:
> Compute intensive: maxes out 1 cpu/core/thread at a time, wasting the remaining CPU and IO resources
> IO intensive: uses 1 cpu/core/thread at a time, wasting CPU resources
> Resulting system utilization is very poor
> Most people do not run other tasks while doing maintenance jobs
> *No indication when it will finish*
26. BACKUP Performance
• pg_dump dbname
> CPU limited, with hot functions _ndoprnt, CopyAttributeOut, CopyOneRowTo, memcpy
> Processes about 36MB/sec when the CPU is saturated
> Multiple pg_dump processes could give about 91MB/sec, which means that if additional cores are used, they could effectively speed up backup
• The same goes for restore
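Since one dump process is CPU-bound at ~36 MB/s while several processes reach ~91 MB/s, the obvious split is one dump task per table across a worker pool. A self-contained sketch of that fan-out; the `dump_table` helper and table names are hypothetical stand-ins (a real version would shell out to pg_dump per table), and threads are used here only to keep the sketch runnable, where CPU-bound work would use separate processes:

```python
from concurrent.futures import ThreadPoolExecutor

def dump_table(table):
    """Hypothetical per-table dump step; a real version would invoke
    pg_dump for just this table."""
    return f"dumped {table}"

def parallel_dump(tables, workers=4):
    # A single dump maxes out one core (~36 MB/s in the slides);
    # fanning tables out across workers is how multiple concurrent
    # dumps reached ~91 MB/s.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(dump_table, tables))

print(parallel_dump(["accounts", "branches", "history"]))
```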
27. VACUUM Performance
• We saw earlier that the state of the last VACUUM is important for performance, which means VACUUM is needed (apart from XID rollover)
• However, VACUUM itself is very inefficient if cost delays are set
> Sample run on an ~15GB table with vacuum_cost_delay=50:
> CPU utilization: 2% avg
> Took 3:03:39 @ 1.39 MB/sec
> Hot functions: heap_prune_chain (30%), lazy_scan_heap (14%), HeapTupleSatisfiesVacuum (14%)
• A heavily updated table can result in a bigger downtime window just to get VACUUM completed on it
28. VACUUM Performance
• If autovacuum costs are controlled and DBA-initiated VACUUM is allowed to go full speed (cost_limit=1, cost_delay=0)
• Hot functions include bsearch
> Sample run on an ~15GB table:
> CPU utilization: 0-55% of one core avg
> Took 0:18:08 @ 14.11 MB/sec
> Hot functions: heap_prune_chain, hash_search_with_hash_value, heap_freeze_tuple
• Even at this rate, a 1TB table could take about 20 hours
• It might help to pipeline reads through one process while processing them with another
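The pipelining idea in the last bullet is a classic producer/consumer split: one worker keeps reads in flight while another does the CPU-heavy per-block work, so neither the disk nor the CPU sits idle waiting on the other. A minimal sketch with threads and a bounded queue; all names are illustrative, not PostgreSQL internals:

```python
import queue
import threading

def pipelined_scan(read_block, process_block, nblocks, depth=32):
    """Sketch of the slide's pipelining idea: a reader thread streams
    blocks into a bounded queue while the caller processes them."""
    q = queue.Queue(maxsize=depth)  # bounded: reader can't run far ahead
    results = []

    def reader():
        for blkno in range(nblocks):
            q.put(read_block(blkno))  # blocks when the queue is full
        q.put(None)                   # end-of-scan marker

    t = threading.Thread(target=reader)
    t.start()
    while (block := q.get()) is not None:
        results.append(process_block(block))
    t.join()
    return results

# Toy usage: "read" returns the block number, "process" doubles it
print(pipelined_scan(lambda b: b, lambda b: b * 2, 5))  # [0, 2, 4, 6, 8]
```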
29. CREATE INDEX Performance
• Dropping an index takes about 10 seconds
• However, index creation takes much longer
> Depending on the type of columns, the backend can process about 18MB/sec before it is limited by single-core performance
> Hot functions are btint8cmp (in this case) (50%), dumptuples (25%), comparetup_index (9.1%), timestamp_cmp (3%)
• This particular index was on an id and a timestamp field
• On a table that takes about 105 seconds for a full sequential scan, it takes about 1,078 seconds to create an index (10X)
30. Summary / Next Steps
• Propose more projects to make DBA utilities multi-process capable, to spread out the work (e.g. VACUUM)
• Propose a separate background reader for sequential scans, so processing can proceed more efficiently without blocking on reads
• Propose re-thinking INDEX in a multi-terabyte world
31. More Acknowledgements
• Greg Smith, Truviso - Guidance on PGBench
• Masood Mortazavi – PostgreSQL Manager @ Sun
• PostgreSQL Team @ Sun
32. More Information
• Blogs on PostgreSQL
> Josh Berkus: http://blogs.ittoolbox.com/database/soup
> Jignesh Shah: http://blogs.sun.com/jkshah/
> Paul van den Bogaard: http://blogs.sun.com/paulvandenbogaard/
> Robert Lor: http://blogs.sun.com/robertlor/
> Tom Daly: http://blogs.sun.com/tomdaly/
• PostgreSQL on Solaris Wiki:
> http://wikis.sun.com/display/DBonSolaris/PostgreSQL
• PostgreSQL Questions:
> postgresql-questions@sun.com
> databases-discuss@opensolaris.org
35. TPC-E Characteristics
• Brokerage-house workload
• Scale factor in terms of active customers, chosen based on target performance (roughly every 1K customers = 7.1GB of raw data to be loaded)
• Lots of constraints and foreign keys
• Business logic (part of the system) can be implemented via stored procedures or other mechanisms
• Can be used to stress multiple features of the database: random IO reads/writes, index performance, stored-procedure performance, response times, etc.
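The 7.1 GB per 1,000 customers rule of thumb makes the earlier "close to 1 TB" observation easy to check. A quick calculation (the 140,000-customer figure is an illustrative assumption, not from the slides):

```python
def tpce_raw_gb(customers, gb_per_1k=7.1):
    """Raw data volume from the slide's rule of thumb:
    ~7.1 GB per 1,000 active TPC-E customers."""
    return customers / 1000 * gb_per_1k

# Around 140,000 active customers already loads close to 1 TB of raw data
print(f"{tpce_raw_gb(140_000):.0f} GB")  # 994 GB
```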
36. TPC-E Highlights
● Complex schema
● Referential integrity
● Less partitionable
● Increased # of transactions
● Transaction frames
● Non-primary-key access to data
● Data access requirements (RAID)
● Complex transaction queries
● Extensive foreign key relationships
● TPC-provided core components