NewSQL overview:
- History of RDBMSs
- Why the NoSQL concept appeared
- Why NoSQL was not enough: the necessity of NewSQL
- Characteristics of NewSQL
- 7 databases that belong to NewSQL
- Overview table with main properties
6. Startups lifecycle
❖ Start: no money, no users, open source
❖ Middle: more users, storage optimization
❖ Final: plenty of users, storage failure
(chart: errors growing with the number of users over the lifecycle)
7. New requirements
❖ Large-scale systems with huge and growing data sets
❖ 9M messages per hour on Facebook
❖ 50M messages per day on Twitter
❖ Information is frequently generated by devices
❖ High concurrency requirements
❖ Usually, a data model with some relations
❖ Often, transactional integrity
8. Trends: architecture change
Before: Client Side -> Server Side -> Database
❖ Consistency, transactions: the Database
❖ Storage optimization: the Database
❖ Scalability: the Client Side
After: Client Side -> Server Side -> Cloud Storage
❖ Consistency, transactions: the Cloud
❖ Storage optimization: the Cloud
❖ Scalability: all levels
10. Trends: architecture change
❖ ‘P’ in CAP is not discrete
❖ Managing partitions: detection, limitations in operations, recovery
11. NoSQL
❖ CAP: first ‘A’, then ‘C’: finer control over availability
❖ Horizontal scaling
❖ Not a “relational model”, custom API
❖ Schemaless
❖ Types: Key-Value, Document, Graph, …
12. Application-level sharding
❖ Additional application-level logic (see the routing sketch below)
❖ Difficulties with cross-shard transactions
❖ More servers to maintain
❖ More components, higher probability of breakdown
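As a sketch of the extra logic the first bullet refers to, here is a minimal hash-based router in Python; the shard names and key format are illustrative, not from the talk.

import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]  # hypothetical shard ids

def shard_for(key: str) -> str:
    # Stable hash so the same key always routes to the same shard
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user:42"))  # deterministic: same shard on every call

Note that plain modulo routing moves most keys when a shard is added, which is one more piece of application-level complexity; real deployments tend to use consistent hashing or a directory service.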
13. NewSQL: definition
"A DBMS that delivers the scalability and flexibility promised by NoSQL while retaining the support for SQL queries and/or ACID, or to improve performance for appropriate workloads."
451 Group
14. NewSQL: definition
❖ SQL as the primary interface
❖ ACID support for transactions
❖ Non-locking concurrency control
❖ High per-node performance
❖ Scalable, shared nothing architecture
Michael Stonebraker
15. Shared nothing architecture
❖ No single point of failure
❖ Each node is independent and self-sufficient
❖ No shared memory or disk
❖ Scale infinitely
❖ Data partitioning
❖ Slow multi-shard requests (see the scatter-gather sketch below)
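A minimal sketch of why multi-shard requests are slow: a point query touches one node, while a scatter-gather query must wait for the slowest node and then merge partial results. Latencies are simulated and names are illustrative.

import random
import time
from concurrent.futures import ThreadPoolExecutor

def query_shard(shard: str) -> list:
    time.sleep(random.uniform(0.01, 0.05))  # simulated network round trip
    return [f"{shard}:row"]

def scatter_gather(shards: list) -> list:
    # The caller blocks until the slowest shard answers, then merges
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(query_shard, shards))
    return [row for part in partials for row in part]

print(scatter_gather(["shard-0", "shard-1", "shard-2"]))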
16. Column-oriented DBMS
❖ Stores content by column rather than by row
❖ Efficient hard-disk access
❖ Good for sparse and repeated data
❖ Higher data compression
❖ More reads/writes for large records with many fields
❖ Better for relatively infrequent writes with high read throughput (OLAP, analytic requests)
Example: the rows (John Smith 20), (Joe Smith 30), (Alice Adams 50) stored column by column:
first name: John:001; Joe:002; Alice:003
last name: Smith:001,002; Adams:003
age: 20:001; 30:002; 50:003
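The same example as a short Python sketch, assuming row ids 001-003: each column becomes its own value-to-row-ids structure, which is why repeated values ("Smith") compress so well.

rows = [
    ("001", "John",  "Smith", 20),
    ("002", "Joe",   "Smith", 30),
    ("003", "Alice", "Adams", 50),
]

# Column-oriented layout: one mapping per column, value -> list of row ids
columns = {"first": {}, "last": {}, "age": {}}
for rid, first, last, age in rows:
    columns["first"].setdefault(first, []).append(rid)
    columns["last"].setdefault(last, []).append(rid)
    columns["age"].setdefault(age, []).append(rid)

print(columns["last"])  # {'Smith': ['001', '002'], 'Adams': ['003']}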
17. Traditional DBMS overheads
(pie chart: where a traditional OLTP DBMS spends its time, split among Buffer Management (29%), Logging, Locking, Index Management, Latching, and Useful Work at 10-20% each)
"Removing those overheads and running the database in main memory would yield orders of magnitude improvements in database performance"
Stonebraker & research group
18. In-memory storage
❖ High throughput
❖ Low latency
❖ No Buffer Management
❖ If serialized, no Locking or Latching
19. In-memory storage: price
Current price for 1TB (~4 instances of the 'r3.8xlarge' type):
            on-demand   3Y-reserved
per hour    $11.2       $3.9
per month   $8.1K       $2.8K
per year    $97K        $33.7K
Amazon price reduction
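A quick arithmetic check of the table, taking the quoted hourly rates as given:

for name, hourly in [("on-demand", 11.2), ("3Y-reserved", 3.9)]:
    per_month = hourly * 24 * 30   # ~8.1K and ~2.8K, matching the table
    per_year = hourly * 24 * 365   # ~97K and ~34K, matching the table
    print(f"{name}: {per_month:,.0f} $/month, {per_year:,.0f} $/year")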
20. NewSQL: categories
❖ New approaches: VoltDB, Clustrix, NuoDB
❖ New storage engines: TokuDB, ScaleDB
❖ Transparent clustering: ScaleBase, dbShards
22. NuoDB
❖ Everything is an ‘Atom’
❖ Peer-to-peer communication, encrypted sessions
❖ MVCC + Append-only storage
23. NuoDB: CAP & ACID
❖ `CP` system: needs a majority of nodes to work
❖ If the cluster splits into two equal halves -> both halves stop
❖ Several consistency modes, including 'consistent_read'
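A sketch of that majority rule, showing why an even split halts both halves; this is an illustration, not the NuoDB membership protocol.

def has_quorum(visible: int, total: int) -> bool:
    # A partition keeps serving only if it sees a strict majority
    return visible > total // 2

print(has_quorum(3, 5))  # True: the majority side keeps working
print(has_quorum(2, 4))  # False: a 50/50 split stops both sides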
28. VoltDB: CAP & ACID
❖ Without K-safety, any node failure breaks the whole DB
❖ Minority segments snapshot and shut down during network partitions
❖ Single-partition transactions are very fast
❖ Multi-partition transactions are slower (coordinated by a transaction manager); try to avoid them (1000s of tps in '13, no updates since)
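A toy sketch of the single- vs multi-partition difference, assuming hash partitioning by key; this illustrates the idea only and is not VoltDB's stored-procedure API.

partitions = {0: {}, 1: {}}  # two hypothetical partitions

def single_partition_tx(key: str, value: str) -> None:
    pid = hash(key) % len(partitions)  # routed to exactly one partition,
    partitions[pid][key] = value       # which executes its queue serially

def multi_partition_tx(items: dict) -> None:
    # A coordinator must involve every affected partition and holds
    # them all until the transaction finishes, so throughput drops
    for key, value in items.items():
        partitions[hash(key) % len(partitions)][key] = value

single_partition_tx("user:1", "Alice")
multi_partition_tx({"user:2": "Bob", "order:7": "book"})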
31. ScaleDB
❖ Multi-master
❖ Shared data
❖ Cluster manager to resolve conflicts (locks)
❖ ACID?
❖ Network partition handling?
❖ Scaling?
(diagram: an Application talks to several MySQL nodes over shared, mirrored storage; a Cluster Manager resolves conflicts)
32. ClustrixDB
❖ "Query fragment": the basic primitive of the system; it can:
❖ read/write/execute a function
❖ modify control flow
❖ perform synchronisation
❖ send rows to query fragments on other nodes
❖ Data partitions: "slices", split and moved transparently
❖ Replication: a master slice for reads plus a slave slice for redundancy
34. ClustrixDB: CAP & ACID
❖ `CP` system: needs a majority of nodes to work
❖ Only the 'Repeatable Read' isolation level (so phantom reads are possible)
❖ Distributed Lock Manager for writer-writer locks (on each node)
35. TPC-C
❖ Online Transaction Processing (OLTP) benchmark
❖ 9 types of tables
❖ 5 concurrent transactions of different complexity
❖ Throughput measured in new-order transactions per minute
36. ClustrixDB: TPC-C
❖ 5000 warehouses, ~400GB of data
❖ Compared with Percona MySQL on an 8-core Intel Xeon
❖ ClustrixDB nodes: dual 4-core Westmere processors
37. ClustrixDB: example
❖ 30M users, 10M logins per day
❖ 4.4B transactions per day
❖ 1.08 PB writes / 4.69 PB reads per month
❖ 42 nodes, 336 cores, 2TB memory, 46TB SSD
38. FoundationDB
❖ KV store with ordered keys
❖ Paxos for cluster coordination
❖ Global ACID transactions, range operations
❖ Lock-free optimistic concurrency, MVCC (see the sketch after this slide)
❖ Good testing (deterministic simulation)
❖ Fault tolerance (replication)
❖ SQL Layer (similar to Google's F1 on top of Spanner)
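A minimal sketch of lock-free optimistic concurrency in the FoundationDB style: read at a snapshot version, buffer writes, and commit only if nothing you read has changed since. This is an illustration of the technique, not the real client API.

class Conflict(Exception):
    pass

store = {}    # key -> (value, version of last write)
version = 0   # global commit version

def commit(read_version: int, read_keys: set, writes: dict) -> int:
    global version
    for key in read_keys:  # conflict check against later writers
        if key in store and store[key][1] > read_version:
            raise Conflict(key)
    version += 1
    for key, value in writes.items():  # apply buffered writes atomically
        store[key] = (value, version)
    return version

rv = version                               # transaction start
commit(rv, {"balance"}, {"balance": 100})  # commits: no conflicting write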
39. FoundationDB
❖ SSD and in-memory storage engines
❖ Layers concept
❖ `CP` system; coordination servers agree via Paxos
❖ Written in the Flow language (translated to C++11) with actor-model support
❖ Watches, atomic operations (e.g. 'add')
40. FoundationDB: CAP and ACID
❖ Serializable isolation with optimistic concurrency
❖ More than 100 writes per second to the same key? Use another DB!
❖ `CP` system (Paxos): needs a majority of the coordination servers to work
42. FoundationDB: SQL Layer
❖ SQL as a layer on top of KV -> transactional, scalable, HA
❖ The SQL Layer is stateless -> scalable, fault tolerant
❖ Hierarchical schema
❖ SQL and JSON interfaces
❖ Powerful indexing (multi-table, geospatial, …)
43. FoundationDB: SQL Performance
Sysbench read/write, ~80GB, 300M rows.
One-node test: 4 cores, 16GB RAM, 200GB SATA SSD.
Multi-node test: KV layer on 8 single-process nodes with 3x replication; SQL layer on up to 32 nodes, driven by an 8-thread sysbench process.
44. MemSQL
❖ In-memory storage for OLTP
❖ Column-oriented storage for OLAP
❖ Compiled query execution plans (+cache)
❖ Local ACID transactions (no global transactions across a distributed cluster)
❖ Lock-free, MVCC
❖ Fault tolerance: automatic replication, redundancy (=2 by default)
❖ [Almost] no penalty for replica creation
45. MemSQL
❖ Two-tiered shared-nothing architecture (see the routing sketch below):
• Aggregators for query routing
• Leaves for storage and processing
❖ Integration:
• SQL
• MySQL protocol
• JSON API
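A sketch of the two tiers above: a stateless aggregator routes point queries to the leaf that owns the key and fans out scans to all leaves. The names and the routing rule are illustrative, not the MemSQL protocol.

LEAVES = {0: {}, 1: {}}  # leaf id -> its local storage

def leaf_for(key: str) -> int:
    return hash(key) % len(LEAVES)  # partition by key hash

def aggregator_put(key: str, value: str) -> None:
    LEAVES[leaf_for(key)][key] = value

def aggregator_get(key: str) -> str:
    return LEAVES[leaf_for(key)].get(key)  # point query hits one leaf

def aggregator_scan() -> list:
    # A full scan fans out to every leaf and merges the partial results
    return [v for leaf in LEAVES.values() for v in leaf.values()]

aggregator_put("user:1", "Alice")
aggregator_put("user:2", "Bob")
print(aggregator_get("user:1"), aggregator_scan())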
46. MemSQL: CAP & ACID
❖ `CP` system: needs a majority of nodes (or half, including the master) to work
❖ Only the 'Read Committed' isolation level (phantom reads and non-repeatable reads are possible)
❖ Manual Master Aggregator management
48. Overview
              Max Isolation | Scalable     | Open Source | Free to try           | Language
PostgreSQL    S             | Postgres-XL? | Yes         | Yes                   | C
NuoDB         CR            | Yes          | No          | <5 domains            | C++
VoltDB        S             | Yes          | Yes         | Yes (w/o HA)          | Java/C++
ScaleDB       RC?           | Yes?         | No          | ?                     | ?
ClustrixDB    RR            | Yes          | No          | Trial (via email req) | C?
FoundationDB  S             | Yes          | Partly      | <6 processes          | Flow (C++)
MemSQL        RC            | Yes          | No          | ?                     | C++
S: Serializable, RR: Repeatable Read, RC: Read Committed, CR: Consistent Read
49. Conclusions
❖ NewSQL is an established trend with a number of options
❖ Hard to pick one because they're not on a common scale
❖ No silver bullet
❖ Growing data volume requires ever more efficient ways to store and process it
51. Links: General concepts
❖ CAP explanation from Brewer, 12 years later
❖ Scalable performance, simple explanation
❖ What is NewSQL
❖ Overview about NoSQL databases
❖ Performance loss in OLTP systems
❖ Memory price trends
❖ (wiki) Shared Nothing Architecture
❖ (wiki) Column oriented DBMS
❖ How NewSQL handles big data
❖ What is YCSB benchmark
❖ What is TPC benchmark
❖ Transactional isolation levels
52. Links: NuoDB
❖ http://www.infoq.com/articles/nuodb-architecture-1/
❖ http://www.infoq.com/articles/nuodb-architecture-2/
❖ http://stackoverflow.com/questions/14552091/nuodb-and-hdfs-as-storage
❖ http://go.nuodb.com/rs/nuodb/images/NuoDB_Benchmark_Report.pdf
❖ NuoDB white paper (google has you :)
❖ https://aphyr.com/posts/292-call-me-maybe-nuodb
❖ http://dev.nuodb.com/techblog/failure-detection-and-network-partition-management-nuodb