Scalable SQL and NoSQL Data Stores

Rick Cattell

Originally published in 2010, last revised December 2011

ABSTRACT

In this paper, we examine a number of SQL and so-called "NoSQL" data stores designed to scale simple OLTP-style application loads over many servers. Originally motivated by Web 2.0 applications, these systems are designed to scale to thousands or millions of users doing updates as well as reads, in contrast to traditional DBMSs and data warehouses. We contrast the new systems on their data model, consistency mechanisms, storage mechanisms, durability guarantees, availability, query support, and other dimensions. These systems typically sacrifice some of these dimensions, e.g. database-wide transaction consistency, in order to achieve others, e.g. higher availability and scalability.

Note: Bibliographic references for systems are not listed, but URLs for more information can be found in the System References table at the end of this paper.

Caveat: Statements in this paper are based on sources and documentation that may not be reliable, and the systems described are "moving targets," so some statements may be incorrect. Verify through other sources before depending on information here. Nevertheless, we hope this comprehensive survey is useful! Check for future corrections on the author's web site cattell.net/datastores.

Disclosure: The author is on the technical advisory board of Schooner Technologies and has a consulting business advising on scalable databases.

1. OVERVIEW

In recent years a number of new systems have been designed to provide good horizontal scalability for simple read/write database operations distributed over many servers. In contrast, traditional database products have comparatively little or no ability to scale horizontally on these applications. This paper examines and compares the various new systems.

Many of the new systems are referred to as "NoSQL" data stores. The definition of NoSQL, which stands for "Not Only SQL" or "Not Relational", is not entirely agreed upon. For the purposes of this paper, NoSQL systems generally have six key features:

1. the ability to horizontally scale "simple operation" throughput over many servers,
2. the ability to replicate and to distribute (partition) data over many servers,
3. a simple call level interface or protocol (in contrast to a SQL binding),
4. a weaker concurrency model than the ACID transactions of most relational (SQL) database systems,
5. efficient use of distributed indexes and RAM for data storage, and
6. the ability to dynamically add new attributes to data records.

The systems differ in other ways, and in this paper we contrast those differences. They range in functionality from the simplest distributed hashing, as supported by the popular memcached open source cache, to highly scalable partitioned tables, as supported by Google's BigTable [1]. In fact, BigTable, memcached, and Amazon's Dynamo [2] provided a "proof of concept" that inspired many of the data stores we describe here:

• Memcached demonstrated that in-memory indexes can be highly scalable, distributing and replicating objects over multiple nodes.
• Dynamo pioneered the idea of eventual consistency as a way to achieve higher availability and scalability: data fetched are not guaranteed to be up-to-date, but updates are guaranteed to be propagated to all nodes eventually.
• BigTable demonstrated that persistent record storage could be scaled to thousands of nodes, a feat that most of the other systems aspire to.

A key feature of NoSQL systems is "shared nothing" horizontal scaling – replicating and partitioning data over many servers. This allows them to support a large number of simple read/write operations per second. This simple operation load is traditionally called OLTP (online transaction processing), but it is also common in modern web applications.

The NoSQL systems described here generally do not provide ACID transactional properties: updates are eventually propagated, but there are limited guarantees on the consistency of reads. Some authors suggest a "BASE" acronym in contrast to the "ACID" acronym:

• BASE = Basically Available, Soft state, Eventually consistent
• ACID = Atomicity, Consistency, Isolation, and Durability

The idea is that by giving up ACID constraints, one can achieve much higher performance and scalability.
However, the systems differ in how much they give up. For example, most of the systems call themselves "eventually consistent", meaning that updates are eventually propagated to all nodes, but many of them provide mechanisms for some degree of consistency, such as multi-version concurrency control (MVCC).

Proponents of NoSQL often cite Eric Brewer's CAP theorem [4], which states that a system can have only two out of three of the following properties: consistency, availability, and partition-tolerance. The NoSQL systems generally give up consistency. However, the trade-offs are complex, as we will see.

New relational DBMSs have also been introduced to provide better horizontal scaling for OLTP, when compared to traditional RDBMSs. After examining the NoSQL systems, we will look at these SQL systems and compare the strengths of the approaches. The SQL systems strive to provide horizontal scalability without abandoning SQL and ACID transactions. We will discuss the trade-offs here.

In this paper, we will refer to both the new SQL and NoSQL systems as data stores, since the term "database system" is widely used to refer to traditional DBMSs. However, we will still use the term "database" to refer to the stored data in these systems. All of the data stores have some administrative unit that you would call a database: data may be stored in one file, or in a directory, or via some other mechanism that defines the scope of data used by a group of applications. Each database is an island unto itself, even if the database is partitioned and distributed over multiple machines: there is no "federated database" concept in these systems (as with some relational and object-oriented databases), allowing multiple separately-administered databases to appear as one. Most of the systems allow horizontal partitioning of data, storing records on different servers according to some key; this is called "sharding". Some of the systems also allow vertical partitioning, where parts of a single record are stored on different servers.

1.1 Scope of this Paper

Before proceeding, some clarification is needed in defining "horizontal scalability" and "simple operations". These define the focus of this paper.

By "simple operations", we refer to key lookups, reads and writes of one record or a small number of records. This is in contrast to complex queries or joins, read-mostly access, or other application loads. With the advent of the web, especially Web 2.0 sites where millions of users may both read and write data, scalability for simple database operations has become more important. For example, applications may search and update multi-server databases of electronic mail, personal profiles, web postings, wikis, customer records, online dating records, classified ads, and many other kinds of data. These all generally fit the definition of "simple operation" applications: reading or writing a small number of related records in each operation.

The term "horizontal scalability" means the ability to distribute both the data and the load of these simple operations over many servers, with no RAM or disk shared among the servers. Horizontal scaling differs from "vertical" scaling, where a database system utilizes many cores and/or CPUs that share RAM and disks. Some of the systems we describe provide both vertical and horizontal scalability, and the effective use of multiple cores is important, but our main focus is on horizontal scalability, because the number of cores that can share memory is limited, and horizontal scaling generally proves less expensive, using commodity servers. Note that horizontal and vertical partitioning are not related to horizontal and vertical scaling, except that they are both useful for horizontal scaling.

1.2 Systems Beyond our Scope

Some authors have used a broad definition of NoSQL, including any database system that is not relational. Specifically, they include:

• Graph database systems: Neo4j and OrientDB provide efficient distributed storage and queries of a graph of nodes with references among them.
• Object-oriented database systems: Object-oriented DBMSs (e.g., Versant) also provide efficient distributed storage of a graph of objects, and materialize these objects as programming language objects.
• Distributed object-oriented stores: Very similar to object-oriented DBMSs, systems such as GemFire distribute object graphs in-memory on multiple servers.

These systems are a good choice for applications that must do fast and extensive reference-following, especially where data fits in memory. Programming language integration is also valuable. Unlike the NoSQL systems, these systems generally provide ACID transactions. Many of them provide horizontal scaling for reference-following and distributed query decomposition, as well. Due to space limitations, however, we have omitted these systems from our comparisons. The applications and the necessary optimizations for scaling for these systems differ from the systems we cover here, where key lookups and simple operations predominate over reference-following and complex object behavior. It is possible these systems can scale on simple operations as well, but that is a topic for a future paper, and proof through benchmarks.
Data warehousing database systems provide horizontal scaling, but are also beyond the scope of this paper. Data warehousing applications are different in important ways:

• They perform complex queries that collect and join information from many different tables.
• The ratio of reads to writes is high: that is, the database is read-only or read-mostly.

There are existing systems for data warehousing that scale well horizontally. Because the data is infrequently updated, it is possible to organize or replicate the database in ways that make scaling possible.

1.3 Data Model Terminology

Unlike relational (SQL) DBMSs, the terminology used by NoSQL data stores is often inconsistent. For the purposes of this paper, we need a consistent way to compare the data models and functionality.

All of the systems described here provide a way to store scalar values, like numbers and strings, as well as BLOBs. Some of them also provide a way to store more complex nested or reference values. The systems all store sets of attribute-value pairs, but use different data structures, specifically:

• A "tuple" is a row in a relational table, where attribute names are pre-defined in a schema, and the values must be scalar. The values are referenced by attribute name, as opposed to an array or list, where they are referenced by ordinal position.
• A "document" allows values to be nested documents or lists as well as scalar values, and the attribute names are dynamically defined for each document at runtime. A document differs from a tuple in that the attributes are not defined in a global schema, and a wider range of values is permitted.
• An "extensible record" is a hybrid between a tuple and a document, where families of attributes are defined in a schema, but new attributes can be added (within an attribute family) on a per-record basis. Attributes may be list-valued.
• An "object" is analogous to an object in programming languages, but without the procedural methods. Values may be references or nested objects.

1.4 Data Store Categories

In this paper, the data stores are grouped according to their data model:

• Key-value Stores: These systems store values and an index to find them, based on a programmer-defined key.
• Document Stores: These systems store documents, as just defined. The documents are indexed and a simple query mechanism is provided.
• Extensible Record Stores: These systems store extensible records that can be partitioned vertically and horizontally across nodes. Some papers call these "wide column stores".
• Relational Databases: These systems store (and index and query) tuples. The new RDBMSs that provide horizontal scaling are covered in this paper.

Data stores in these four categories are covered in the next four sections, respectively. We will then summarize and compare the systems.

2. KEY-VALUE STORES

The simplest data stores use a data model similar to the popular memcached distributed in-memory cache, with a single key-value index for all the data. We'll call these systems key-value stores. Unlike memcached, these systems generally provide a persistence mechanism and additional functionality as well: replication, versioning, locking, transactions, sorting, and/or other features. The client interface provides inserts, deletes, and index lookups. Like memcached, none of these systems offer secondary indices or keys.

2.1 Project Voldemort

Project Voldemort is an advanced key-value store, written in Java. It is open source, with substantial contributions from LinkedIn. Voldemort provides multi-version concurrency control (MVCC) for updates. It updates replicas asynchronously, so it does not guarantee consistent data. However, it can guarantee an up-to-date view if you read a majority of replicas.

Voldemort supports optimistic locking for consistent multi-record updates: if updates conflict with any other process, they can be backed out. Vector clocks, as used in Dynamo [3], provide an ordering on versions. You can also specify which version you want to update, for the put and delete operations.

Voldemort supports automatic sharding of data. Consistent hashing is used to distribute data around a ring of nodes: data hashed to node K is replicated on node K+1 ... K+n where n is the desired number of extra copies (often n=1). Using good sharding technique, there should be many more "virtual" nodes than physical nodes (servers). Once data partitioning is set up, its operation is transparent. Nodes can be added or removed from a database cluster, and the system adapts automatically. Voldemort automatically detects and recovers failed nodes.
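As a concrete illustration of the consistent hashing scheme just described, here is a minimal Python sketch. It is not Voldemort's implementation; the hash function, the number of virtual nodes, and the server names are arbitrary assumptions.

    import hashlib
    from bisect import bisect

    class HashRing:
        # Minimal consistent-hashing sketch (not Voldemort's code).
        def __init__(self, servers, virtual_nodes=100, replicas=1):
            self.replicas = replicas          # n extra copies beyond the primary
            self.ring = []                    # sorted list of (hash, server)
            for server in servers:
                for v in range(virtual_nodes):
                    self.ring.append((self._hash("%s#%d" % (server, v)), server))
            self.ring.sort()

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def nodes_for(self, key):
            # Primary plus the next `replicas` distinct servers clockwise.
            start = bisect(self.ring, (self._hash(key), ""))
            owners = []
            for i in range(len(self.ring)):
                _, server = self.ring[(start + i) % len(self.ring)]
                if server not in owners:
                    owners.append(server)
                if len(owners) == self.replicas + 1:
                    break
            return owners

    ring = HashRing(["node1", "node2", "node3", "node4"], replicas=1)
    print(ring.nodes_for("user:12345"))   # e.g. ['node3', 'node1']

Adding or removing a server only remaps the keys adjacent to its virtual nodes, which is why systems like Voldemort can rebalance incrementally when nodes join or leave.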
Voldemort can store data in RAM, but it also permits plugging in a storage engine. In particular, it supports a Berkeley DB and Random Access File storage engine. Voldemort supports lists and records in addition to simple scalar values.

2.2 Riak

Riak is written in Erlang. It was open-sourced by Basho in mid-2009. Basho alternately describes Riak as a "key-value store" and "document store". We will categorize it as an advanced key-value store here, because it lacks important features of document stores, but it (and Voldemort) have more functionality than the other key-value stores:

• Riak objects can be fetched and stored in JSON format, and thus can have multiple fields (like documents), and objects can be grouped into buckets, like the collections supported by document stores, with allowed/required fields defined on a per-bucket basis.
• Riak does not support indices on any fields except the primary key. The only thing you can do with the non-primary fields is fetch and store them as part of a JSON object. Riak lacks the query mechanisms of the document stores; the only lookup you can do is on primary key.

Riak supports replication of objects and sharding by hashing on the primary key. It allows replica values to be temporarily inconsistent. Consistency is tunable by specifying how many replicas (on different nodes) must respond for a successful read and how many must respond for a successful write. This is per-read and per-write, so different parts of an application can choose different trade-offs.

Like Voldemort, Riak uses a derivative of MVCC where vector clocks are assigned when values are updated. Vector clocks can be used to determine when objects are direct descendants of each other or a common parent, so Riak can often self-repair data that it discovers to be out of sync.

The Riak architecture is symmetric and simple. Like Voldemort, it uses consistent hashing. There is no distinguished node to track status of the system: the nodes use a gossip protocol to track who is alive and who has which data, and any node may service a client request. Riak also includes a map/reduce mechanism to split work over all the nodes in a cluster.

The client interface to Riak is based on RESTful HTTP requests. REST (REpresentational State Transfer) uses uniform, stateless, cacheable, client-server calls. There is also a programmatic interface for Erlang, Java, and other languages.

The storage part of Riak is "pluggable": the key-value pairs may be in memory, in ETS tables, in DETS tables, or in Osmos tables. ETS, DETS, and Osmos tables are all implemented in Erlang, with different performance and properties.

One unique feature of Riak is that it can store "links" between objects (documents), for example to link objects for authors to the objects for the books they wrote. Links reduce the need for secondary indices, but there is still no way to do range queries.

Here's an example of a Riak object described in JSON:

    {
      "bucket": "customers",
      "key": "12345",
      "object": {
        "name": "Mr. Smith",
        "phone": "415-555-6524"
      },
      "links": [
        ["sales", "Mr. Salesguy", "salesrep"],
        ["cust-orders", "12345", "orders"]
      ],
      "vclock": "opaque-riak-vclock",
      "lastmod": "Mon, 03 Aug 2009 18:49:42 GMT"
    }

Note that the primary key is distinguished, while other fields are part of an "object" portion. Also note that the bucket, vector clock, and modification date are specified as part of the object, and links to other objects are supported.
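The vector clocks used by Voldemort and Riak can be sketched in a few lines of Python. This is a conceptual sketch, not either system's code: each node increments its own counter when it updates a value, one version descends from another when its clock dominates, and two versions conflict when neither dominates.

    def increment(clock, node):
        # Return a copy of the vector clock with `node`'s counter advanced.
        clock = dict(clock)
        clock[node] = clock.get(node, 0) + 1
        return clock

    def descends(a, b):
        # True if the version with clock `a` is a descendant of (or equal to) `b`.
        return all(a.get(node, 0) >= count for node, count in b.items())

    def conflict(a, b):
        # Neither version descends from the other: concurrent updates.
        return not descends(a, b) and not descends(b, a)

    v1 = increment({}, "node1")                 # {'node1': 1}
    v2 = increment(v1, "node2")                 # descendant of v1
    v3 = increment(v1, "node3")                 # sibling of v2
    print(descends(v2, v1), conflict(v2, v3))   # True True

When a conflict is detected, a store can keep both sibling versions and let the application (or a later write carrying a descendant clock) resolve them, which is the self-repair behavior described above.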
2.3 Redis

The Redis key-value data store started as a one-person project but now has multiple contributors as BSD-licensed open source. It is written in C.

A Redis server is accessed by a wire protocol implemented in various client libraries (which must be updated when the protocol changes). The client side does the distributed hashing over servers. The servers store data in RAM, but data can be copied to disk for backup or system shutdown. System shutdown may be needed to add more nodes.

Like the other key-value stores, Redis implements insert, delete and lookup operations. Like Voldemort, it allows lists and sets to be associated with a key, not just a blob or string. It also includes list and set operations.

Redis does atomic updates by locking, and does asynchronous replication. It is reported to support about 100K gets/sets per second on an 8-core server.

2.4 Scalaris

Scalaris is functionally similar to Redis. It was written in Erlang at the Zuse Institute in Berlin, and is open source. In distributing data over nodes, it allows key ranges to be assigned to nodes, rather than simply hashing to nodes. This means that a query on a range of values does not need to go to every node, and it also may allow better load balancing, depending on key distribution.

Like the other key-value stores, it supports insert, delete, and lookup. It does replication synchronously (copies must be updated before the operation is complete) so data is guaranteed to be consistent. Scalaris also supports transactions with ACID properties on multiple objects. Data is stored in memory, but replication and recovery from node failures provides durability of the updates. Nevertheless, a multi-node power failure would cause disastrous loss of data, and the virtual memory limit sets a maximum database size.

Scalaris reads and writes must go to a majority of the replicas before an operation completes. Scalaris uses a ring of nodes, an unusual distribution and replication strategy that requires log(N) hops to read/write a key-value pair.

2.5 Tokyo Cabinet

Tokyo Cabinet / Tokyo Tyrant was a sourceforge.net project, but is now licensed and maintained by FAL Labs. Tokyo Cabinet is the back-end server; Tokyo Tyrant is a client library for remote access. Both are written in C.

There are six different variations for the Tokyo Cabinet server: hash indexes in memory or on disk, B-trees in memory or on disk, fixed-size record tables, and variable-length record tables. The engines obviously differ in their performance characteristics, e.g. the fixed-length records allow quick lookups. There are slight variations on the API supported by these engines, but they all support common get/set/update operations. The documentation is a bit unclear, but they claim to support locking, ACID transactions, a binary array data type, and more complex update operations to atomically update a number or concatenate to a string. They support asynchronous replication with dual master or master/slave. Recovery of a failed node is manual, and there is no automatic sharding.

2.6 Memcached, Membrain, and Membase

The memcached open-source distributed in-memory indexing system has been enhanced by Schooner Technologies and Membase, to include features analogous to the other key-value stores: persistence, replication, high availability, dynamic growth, backup, and so on. Without persistence or replication, memcached does not really qualify as a "data store". However, Membrain and Membase certainly do, and these systems are also compatible with existing memcached applications. This compatibility is an attractive feature, given that memcached is widely used; memcached users that require more advanced features can easily upgrade to Membase and Membrain.

The Membase system is open source, and is supported by the company Membase. Its most attractive feature is probably its ability to elastically add or remove servers in a running system, moving data and dynamically redirecting requests in the meantime. The elasticity in most of the other systems is not as convenient.

Membrain is licensed per server, and is supported by Schooner Technologies. Its most attractive feature is probably its excellent tuning for flash memory. The performance gains of flash memory will not be gained in other systems by treating flash as a faster hard disk; it is important that the system treat flash as a true "third tier", different from RAM and disk. For example, many systems have substantial overhead in buffering and caching hard disk pages; this is unnecessary overhead with flash. The benchmark results on Schooner's web site show many times better performance than a number of competitors, particularly when data overflows RAM.

2.7 Summary

All the key-value stores support insert, delete, and lookup operations. All of these systems provide scalability through key distribution over nodes. Voldemort, Riak, Tokyo Cabinet, and enhanced memcached systems can store data in RAM or on disk, with storage add-ons. The others store data in RAM, and provide disk as backup, or rely on replication and recovery so that a backup is not needed.

Scalaris and enhanced memcached systems use synchronous replication; the rest use asynchronous. Scalaris and Tokyo Cabinet implement transactions, while the others do not.

Voldemort and Riak use multi-version concurrency control (MVCC); the others use locks.

Membrain and Membase are built on the popular memcached system, adding persistence, replication, and other features. Backward compatibility with memcached gives these products an advantage.
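The tunable consistency offered by Riak and the majority rule used by Scalaris both come down to the same replica arithmetic. The sketch below is plain Python with illustrative numbers: if R + W > N, every read quorum overlaps every write quorum, so a read is guaranteed to see at least one up-to-date copy.

    def quorums_overlap(n_replicas, r, w):
        # A read quorum intersects every write quorum when R + W > N.
        return r + w > n_replicas

    # Majority reads and writes on 3 replicas (Scalaris-style): always overlap.
    print(quorums_overlap(3, r=2, w=2))    # True

    # Fast, eventually consistent setting (e.g. R=1, W=1 on 3 replicas):
    print(quorums_overlap(3, r=1, w=1))    # False: a read may miss the latest write

    # Read-heavy tuning: cheap reads, expensive writes, still consistent.
    print(quorums_overlap(3, r=1, w=3))    # True

Because R and W can be chosen per request in some of these systems, different parts of an application can pick different points on this trade-off, as noted for Riak above.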
3. DOCUMENT STORES

As discussed in the first section, document stores support more complex data than the key-value stores. The term "document store" may be confusing: while these systems could store "documents" in the traditional sense (articles, Microsoft Word files, etc.), a document in these systems can be any kind of "pointerless object", consistent with our definition in Section 1. Unlike the key-value stores, these systems generally support secondary indexes and multiple types of documents (objects) per database, and nested documents or lists. Like other NoSQL systems, the document stores do not provide ACID transactional properties.

3.1 SimpleDB

SimpleDB is part of Amazon's proprietary cloud computing offering, along with their Elastic Compute Cloud (EC2) and their Simple Storage Service (S3) on which SimpleDB is based. SimpleDB has been around since 2007. As the name suggests, its model is simple: SimpleDB has Select, Delete, GetAttributes, and PutAttributes operations on documents. SimpleDB is simpler than other document stores, as it does not allow nested documents.

Like most of the systems we discuss, SimpleDB supports eventual consistency, not transactional consistency. Like most of the other systems, it does asynchronous replication.

Unlike key-value datastores, and like the other document stores, SimpleDB supports more than one grouping in one database: documents are put into domains, which support multiple indexes. You can enumerate domains and their metadata. Select operations are on one domain, and specify a conjunction of constraints on attributes, basically in the form:

    select <attributes> from <domain> where <list of attribute value constraints>

Different domains may be stored on different Amazon nodes.

Domain indexes are automatically updated when any document's attributes are modified. It is unclear from the documentation whether SimpleDB automatically selects which attributes to index, or if it indexes everything. In either case, the user has no choice, and the use of the indexes is automatic in SimpleDB query processing.

SimpleDB does not automatically partition data over servers. Some horizontal scaling can be achieved by reading any of the replicas, if you don't care about having the latest version. Writes do not scale, however, because they must go asynchronously to all copies of a domain. If customers want better scaling, they must do so manually by sharding themselves.

SimpleDB is a "pay as you go" proprietary solution from Amazon. There are currently built-in constraints, some of which are quite limiting: a 10 GB maximum domain size, a limit of 100 active domains, a 5 second limit on queries, and so on. Amazon doesn't license SimpleDB source or binary code to run on your own servers. SimpleDB does have the advantage of Amazon support and documentation.

3.2 CouchDB

CouchDB has been an Apache project since early 2008. It is written in Erlang.

A CouchDB "collection" of documents is similar to a SimpleDB domain, but the CouchDB data model is richer. Collections comprise the only schema in CouchDB, and secondary indexes must be explicitly created on fields in collections. A document has field values that can be scalar (text, numeric, or boolean) or compound (a document or list).

Queries are done with what CouchDB calls "views", which are defined with Javascript to specify field constraints. The indexes are B-trees, so the results of queries can be ordered or value ranges. Queries can be distributed in parallel over multiple nodes using a map-reduce mechanism. However, CouchDB's view mechanism puts more burden on programmers than a declarative query language.

Like SimpleDB, CouchDB achieves scalability through asynchronous replication, not through sharding. Reads can go to any server, if you don't care about having the latest values, and updates must be propagated to all the servers. However, a new project called CouchDB Lounge has been built to provide sharding on top of CouchDB, see: http://code.google.com/p/couchdb-lounge/

Like SimpleDB, CouchDB does not guarantee consistency. Unlike SimpleDB, each client does see a self-consistent view of the database, with repeatable reads: CouchDB implements multi-version concurrency control on individual documents, with a Sequence ID that is automatically created for each version of a document. CouchDB will notify an application if someone else has updated the document since it was fetched. The application can then try to combine the updates, or can just retry its update and overwrite.

CouchDB also provides durability on system crash. All updates (documents and indexes) are flushed to disk on commit, by writing to the end of a file. (This means that periodic compaction is needed.) By default, it flushes to disk after every document update. Together with the MVCC mechanism, CouchDB's durability thus provides ACID semantics at the document level.

Clients call CouchDB through a RESTful interface. There are libraries for various languages (Java, C, PHP, Python, LISP, etc.) that convert native API calls into the RESTful calls for you. CouchDB has some basic database administration functionality as well.
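CouchDB's per-document MVCC lends itself to an optimistic read-modify-retry loop. The sketch below is generic Python with a toy in-memory store standing in for the database; the method names (get, put_if_version) are hypothetical, not CouchDB's REST API, which carries the same information in a document's _rev field.

    class VersionConflict(Exception):
        pass

    class InMemoryStore:
        # Toy stand-in for a document store with per-document sequence IDs.
        def __init__(self):
            self.docs = {}                                # doc_id -> (document, version)

        def get(self, doc_id):
            return self.docs.get(doc_id, ({}, 0))

        def put_if_version(self, doc_id, doc, expected_version):
            _, current = self.get(doc_id)
            if current != expected_version:
                raise VersionConflict(doc_id)
            self.docs[doc_id] = (doc, current + 1)

    def update_document(store, doc_id, apply_change, max_retries=5):
        # Optimistic update: retry if another writer bumped the version underneath us.
        for _ in range(max_retries):
            doc, version = store.get(doc_id)
            try:
                store.put_if_version(doc_id, apply_change(doc), expected_version=version)
                return
            except VersionConflict:
                continue                                  # re-read, then merge or retry
        raise RuntimeError("gave up after repeated update conflicts")

    store = InMemoryStore()
    update_document(store, "customer/12345", lambda d: {**d, "status": "active"})
    print(store.get("customer/12345"))                    # ({'status': 'active'}, 1)

The retry branch is where an application would either merge the concurrent changes or simply overwrite, the two options CouchDB leaves to the client as described above.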
3.3 MongoDB

MongoDB is a GPL open source document store written in C++ and supported by 10gen. It has some similarities to CouchDB: it provides indexes on collections, it is lockless, and it provides a document query mechanism. However, there are important differences:

• MongoDB supports automatic sharding, distributing documents over servers.
• Replication in MongoDB is mostly used for failover, not for (dirty read) scalability as in CouchDB. MongoDB does not provide the global consistency of a traditional DBMS, but you can get local consistency on the up-to-date primary copy of a document.
• MongoDB supports dynamic queries with automatic use of indices, like RDBMSs. In CouchDB, data is indexed and searched by writing map-reduce views.
• CouchDB provides MVCC on documents, while MongoDB provides atomic operations on fields.

Atomic operations on fields are provided as follows:

• The update command supports "modifiers" that facilitate atomic changes to individual values: $set sets a value, $inc increments a value, $push appends a value to an array, $pushAll appends several values to an array, $pull removes a value from an array, and $pullAll removes several values from an array. Since these updates normally occur "in place", they avoid the overhead of a return trip to the server.
• There is an "update if current" convention for changing a document only if field values match a given previous value.
• MongoDB supports a findAndModify command to perform an atomic update and immediately return the updated document. This is useful for implementing queues and other data structures requiring atomicity.

MongoDB indices are explicitly defined using an ensureIndex call, and any existing indices are automatically used for query processing. To find all products released last year costing under $100 you could write:

    db.products.find(
      {released: {$gte: new Date(2009, 1, 1)},
       price: {$lte: 100}})

If indexes are defined on the queried fields, MongoDB will automatically use them. MongoDB also supports map-reduce, which allows for complex aggregations across documents.

MongoDB stores data in a binary JSON-like format called BSON. BSON supports boolean, integer, float, date, string and binary types. Client drivers encode the local language's document data structure (usually a dictionary or associative array) into BSON and send it over a socket connection to the MongoDB server (in contrast to CouchDB, which sends JSON as text over an HTTP REST interface). MongoDB also supports a GridFS specification for large binary objects, e.g. images and videos. These are stored in chunks that can be streamed back to the client for efficient delivery.

MongoDB supports master-slave replication with automatic failover and recovery. Replication (and recovery) is done at the level of shards. Collections are automatically sharded via a user-defined shard key. Replication is asynchronous for higher performance, so some updates may be lost on a crash.
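For completeness, here is how the atomic modifiers described above look from a client program. The sketch assumes the Python driver (pymongo) and uses its newer method names (update_one, find_one_and_update), which postdate this paper; the operators themselves ($inc, $push, $set) are the ones listed above, and the collection and field names are made up.

    from pymongo import MongoClient, ReturnDocument

    db = MongoClient("mongodb://localhost:27017")["shop"]

    # Atomic in-place modifiers: no read-modify-write round trip.
    db.products.update_one(
        {"_id": "sku-123"},
        {"$inc": {"stock": -1}, "$push": {"audit": "sold one unit"}})

    # "Update if current": apply the change only if the field still has the
    # value we last read.
    db.products.update_one(
        {"_id": "sku-123", "price": 90},
        {"$set": {"price": 100}})

    # findAndModify-style queue pop: atomically claim and return one job.
    job = db.jobs.find_one_and_update(
        {"state": "queued"},
        {"$set": {"state": "running"}},
        return_document=ReturnDocument.AFTER)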
3.4 Terrastore

Another recent document store is Terrastore, which is built on the Terracotta distributed Java VM clustering product. Like many of the other NoSQL systems, client access to Terrastore is built on HTTP operations to fetch and store data. Java and Python client APIs have also been implemented.

Terrastore automatically partitions data over server nodes, and can automatically redistribute data when servers are added or removed. Like MongoDB, it can perform queries based on a predicate, including range queries, and like CouchDB, it includes a map/reduce mechanism for more advanced selection and aggregation of data.

Like the other document databases, Terrastore is schema-less, and does not provide ACID transactions. Like MongoDB, it provides consistency on a per-document basis: a read will always fetch the latest version of a document.

Terrastore supports replication and failover to a hot standby.

3.5 Summary

The document stores are schema-less, except for attributes (which are simply a name, and are not pre-specified), collections (which are simply a grouping of documents), and the indexes defined on collections (explicitly defined, except with SimpleDB). There are some differences in their data models, e.g. SimpleDB does not allow nested documents.

The document stores are very similar but use different terminology. For example, a SimpleDB Domain = CouchDB Database = MongoDB Collection = Terrastore Bucket. SimpleDB calls documents "items", and an attribute is a field in CouchDB, or a key in MongoDB or Terrastore.

Unlike the key-value stores, the document stores provide a mechanism to query collections based on multiple attribute value constraints. However, CouchDB does not support a non-procedural query language: it puts more work on the programmer and requires explicit utilization of indices.

The document stores generally do not provide explicit locks, and have weaker concurrency and atomicity properties than traditional ACID-compliant databases. They differ in how much concurrency control they provide.

Documents can be distributed over nodes in all of the systems, but scalability differs. All of the systems can achieve scalability by reading (potentially) out-of-date replicas. MongoDB and Terrastore can obtain scalability without that compromise, and can scale writes as well, through automatic sharding and atomic operations on documents. CouchDB might be able to achieve this write-scalability with the help of the new CouchDB Lounge code.

A last-minute addendum as this paper goes to press: the CouchDB and Membase companies have now merged, to form Couchbase. They plan to provide a "best of both" merge of their products, e.g. with CouchDB's richer data model as well as the speed and elastic scalability of Membase. See Couchbase.com for more information.

4. EXTENSIBLE RECORD STORES

The extensible record stores seem to have been motivated by Google's success with BigTable. Their basic data model is rows and columns, and their basic scalability model is splitting both rows and columns over multiple nodes:

• Rows are split across nodes through sharding on the primary key. They typically split by range rather than a hash function. This means that queries on ranges of values do not have to go to every node.
• Columns of a table are distributed over multiple nodes by using "column groups". These may seem like a new complexity, but column groups are simply a way for the customer to indicate which columns are best stored together.

As noted earlier, these two partitionings (horizontal and vertical) can be used simultaneously on the same table. For example, if a customer table is partitioned into three column groups (say, separating the customer name/address from financial and login information), then each of the three column groups is treated as a separate table for the purposes of sharding the rows by customer ID: the column groups for one customer may or may not be on the same server.

The column groups must be pre-defined with the extensible record stores. However, that is not a big constraint, as new attributes can be defined at any time. Rows are analogous to documents: they can have a variable number of attributes (fields), the attribute names must be unique, rows are grouped into collections (tables), and an individual row's attributes can be of any type. (However, note that CouchDB and MongoDB support nested objects, while the extensible record stores generally support only scalar types.)

Although most extensible record stores were patterned after BigTable, it appears that none of the extensible record stores come anywhere near to BigTable's scalability at present. BigTable is used for many purposes (think of the many services Google provides, not just web search). It is worthwhile reading the BigTable paper [1] for background on the challenges with scaling.

4.1 HBase

HBase is an Apache project written in Java. It is patterned directly after BigTable:

• HBase uses the Hadoop distributed file system in place of the Google file system. It puts updates into memory and periodically writes them out to files on the disk.
• The updates go to the end of a data file, to avoid seeks. The files are periodically compacted. Updates also go to the end of a write ahead log, to perform recovery if a server crashes.
• Row operations are atomic, with row-level locking and transactions. There is optional support for transactions with wider scope. These use optimistic concurrency control, aborting if there is a conflict with other updates.
• Partitioning and distribution are transparent; there is no client-side hashing or fixed keyspace as in some NoSQL systems. There is multiple master support, to avoid a single point of failure. MapReduce support allows operations to be distributed efficiently.
• HBase's log-structured merge file indexes allow fast range queries and sorting.
• There is a Java API, a Thrift API, and a REST API. JDBC/ODBC support has recently been added.

The initial prototype of HBase was released in February 2007. The support for transactions is attractive, and unusual for a NoSQL system.

4.2 HyperTable

HyperTable is written in C++. It was open-sourced by Zvents. It doesn't seem to have taken off in popularity yet, but Baidu became a project sponsor, which should help.

Hypertable is very similar to HBase and BigTable. It uses column families that can have any number of column "qualifiers". It uses timestamps on data with MVCC. It requires an underlying distributed file system such as Hadoop, and a distributed lock manager. Tables are replicated and partitioned over servers by key ranges. Updates are done in memory and later flushed to disk.

Hypertable supports a number of programming language client interfaces. It uses a query language named HQL.
4.3 Cassandra

Cassandra is similar to the other extensible record stores in its data model and basic functionality. It has column groups, updates are cached in memory and then flushed to disk, and the disk representation is periodically compacted. It does partitioning and replication. Failure detection and recovery are fully automatic. However, Cassandra has a weaker concurrency model than some other systems: there is no locking mechanism, and replicas are updated asynchronously.

Like HBase, Cassandra is written in Java, and used under Apache licensing. It is supported by DataStax, and was originally open sourced by Facebook in 2008. It was designed by a Facebook engineer and a Dynamo engineer, and is described as a marriage of Dynamo and BigTable. Cassandra is used by Facebook as well as other companies, so the code is reasonably mature.

Client interfaces are created using Facebook's Thrift framework: http://incubator.apache.org/thrift/

Cassandra automatically brings new available nodes into a cluster, uses the phi accrual algorithm to detect node failure, and determines cluster membership in a distributed fashion with a gossip-style algorithm.

Cassandra adds the concept of a "supercolumn" that provides another level of grouping within column groups. Databases (called keyspaces) contain column families. A column family contains either supercolumns or columns (not a mix of both). Supercolumns contain columns. As with the other systems, any row can have any combination of column values (i.e., rows are variable length and are not constrained by a table schema).

Cassandra uses an ordered hash index, which should give most of the benefit of both hash and B-tree indexes: you know which nodes could have a particular range of values instead of searching all nodes. However, sorting would still be slower than with B-trees.

Cassandra has reportedly scaled to about 150 machines in production at Facebook, perhaps more by now. Cassandra seems to be gaining a lot of momentum as an open source project, as well.

For applications where Cassandra's eventual-consistency model is not adequate, "quorum reads" of a majority of replicas provide a way to get the latest data. Cassandra writes are atomic within a column family. There is also some support for versioning and conflict resolution.

4.4 Other Systems

Yahoo's PNUTs system also belongs in the "extensible record store" category. However, it is not reviewed in this paper, as it is currently only used internally to Yahoo. We also have not reviewed BigTable, although its functionality is available indirectly through Google Apps. Both PNUTs and BigTable are included in the comparison table at the end of this paper.

4.5 Summary

The extensible record stores are mostly patterned after BigTable. They are all similar, but differ in concurrency mechanisms and other features. Cassandra focuses on "weak" concurrency (via MVCC) and HBase and HyperTable on "strong" consistency (via locks and logging).

5. SCALABLE RELATIONAL SYSTEMS

Unlike the other data stores, relational DBMSs have a complete pre-defined schema, a SQL interface, and ACID transactions. Traditionally, RDBMSs have not achieved the scalability of some of the previously-described data stores. As of 5 years ago, MySQL Cluster appeared the most scalable, although not highly performant per node, compared to standard MySQL.

Recent developments are changing things. Further performance improvements have been made to MySQL Cluster, and several new products have come out, in particular VoltDB and Clustrix, that promise to have good per-node performance as well as scalability. It appears likely that some relational DBMSs will provide scalability comparable with NoSQL data stores, with two provisos:

• Use small-scope operations: As we've noted, operations that span many nodes, e.g. joins over many tables, will not scale well with sharding.
• Use small-scope transactions: Likewise, transactions that span many nodes are going to be very inefficient, with the communication and two-phase commit overhead.

Note that NoSQL systems avoid these two problems by making it difficult or impossible to perform larger-scope operations and transactions. In contrast, a scalable RDBMS does not need to preclude larger-scope operations and transactions: they simply penalize a customer for these operations if they use them. Scalable RDBMSs thus have an advantage over the NoSQL data stores, because you have the convenience of the higher-level SQL language and ACID properties, but you only pay a price for those when they span nodes.
Scalable RDBMSs are therefore included as a viable alternative in this paper.

5.1 MySQL Cluster

MySQL Cluster has been part of the MySQL release since 2004, and the code evolved from an even earlier project from Ericsson. MySQL Cluster works by replacing the InnoDB engine with a distributed layer called NDB. It is available from MySQL (now Oracle); it is open source. A proprietary MySQL Cluster Carrier Grade upgrade provides administrative and automated management functionality.

MySQL Cluster shards data over multiple database servers (a "shared nothing" architecture). Every shard is replicated, to support recovery. Bi-directional geographic replication is also supported.

MySQL Cluster supports in-memory as well as disk-based data. In-memory storage allows real-time responses.

Although MySQL Cluster seems to scale to more nodes than other RDBMSs to date, it reportedly runs into bottlenecks after a few dozen nodes. Work continues on MySQL Cluster, so this is likely to improve.

5.2 VoltDB

VoltDB is a new open-source RDBMS designed for high performance (per node) as well as scalability. The scalability and availability features are competitive with MySQL Cluster and the NoSQL systems in this paper:

• Tables are partitioned over multiple servers, and clients can call any server. The distribution is transparent to SQL users, but the customer can choose the sharding attribute.
• Alternatively, selected tables can be replicated over servers, e.g. for fast access to read-mostly data.
• In any case, shards are replicated, so that data can be recovered in the event of a node crash. Database snapshots are also supported, continuous or scheduled.

Some features are still missing, e.g. online schema changes are currently limited, and asynchronous WAN replication and recovery are not yet implemented. However, VoltDB has some promising features that collectively may yield an order of magnitude advantage in single-node performance. VoltDB eliminates nearly all "waits" in SQL execution, allowing a very efficient implementation:

• The system is designed for a database that fits in (distributed) RAM on the servers, so that the system need never wait for the disk. Indexes and record structures are designed for RAM rather than disk, and the overhead of a disk cache/buffer is eliminated as well. Performance will be very poor if virtual memory overflows RAM, but the gain with good RAM capacity planning is substantial.
• SQL execution is single-threaded for each shard, using a shared-nothing architecture, so there is no overhead for multi-thread latching.
• All SQL calls are made through stored procedures, with each stored procedure being one transaction. This means, if data is sharded to allow transactions to be executed on a single node, then no locks are required, and therefore no waits on locks. Transaction coordination is likewise avoided.
• Stored procedures are compiled to produce code comparable to the access level calls of NoSQL systems. They can be executed in the same order on a node and on replica node(s).

VoltDB argues that these optimizations greatly reduce the number of nodes needed to support a given application load, with modest constraints on the database design. They have already reported some impressive benchmark results on their web site. Of course, the highest performance requires that the database working set fits in distributed RAM, perhaps extended by SSDs. See [5] for some debate of the architectural issues on VoltDB and similar systems.
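The single-threaded, one-transaction-per-stored-procedure design can be illustrated with a short sketch. VoltDB procedures are actually written in Java and SQL; this is only a Python illustration of the scheduling idea, with made-up procedure and table names: each partition is owned by one thread, a procedure runs there to completion, and single-partition transactions therefore need no locks or latches.

    from queue import Queue
    from threading import Thread

    class Partition:
        # One single-threaded executor per shard: procedures run to completion, no locks.
        def __init__(self, partition_id):
            self.id = partition_id
            self.data = {}                        # this partition's slice of the tables
            self.work = Queue()
            Thread(target=self._run, daemon=True).start()

        def _run(self):
            while True:
                procedure, args, reply = self.work.get()
                reply.put(procedure(self.data, *args))    # serial execution = one transaction

        def execute(self, procedure, *args):
            reply = Queue()
            self.work.put((procedure, args, reply))
            return reply.get()

    def add_to_cart(data, customer_id, sku):      # a "stored procedure"
        data.setdefault(customer_id, []).append(sku)
        return len(data[customer_id])

    partitions = [Partition(i) for i in range(4)]

    def route(customer_id):                       # shard on the customer key
        return partitions[hash(customer_id) % len(partitions)]

    print(route("cust-42").execute(add_to_cart, "cust-42", "sku-123"))   # 1

A procedure that touched rows in several partitions would need coordination across these executors, which is exactly the cross-node cost the provisos listed earlier warn about.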
5.3 Clustrix

Clustrix offers a product with similarities to VoltDB and MySQL Cluster, but Clustrix nodes are sold as rack-mounted appliances. They claim scalability to hundreds of nodes, with automatic sharding and replication (with a 4:1 read/write ratio, they report 350K TPS on 20 nodes and 160M rows). Failover is automatic, and failed node recovery is automatic. They also use solid state disks for additional performance (like the Schooner MySQL and NoSQL appliances).

As with the other relational products, Clustrix supports SQL with fully-ACID transactions. Data distribution and load balancing is transparent to the application programmer. Interestingly, they also designed their system to be seamlessly compatible with MySQL, supporting existing MySQL applications and front-end connectors. This could give them a big advantage in gaining adoption of proprietary hardware.

5.4 ScaleDB

ScaleDB is a new derivative of MySQL underway. Like MySQL Cluster, it replaces the InnoDB engine, and uses clustering of multiple servers to achieve scalability. ScaleDB differs in that it requires disks shared across nodes. Every server must have access to every disk. This architecture has not scaled very well for Oracle RAC, however.

ScaleDB's sharding is automatic: more servers can be added at any time. Server failure handling is also automatic. ScaleDB redistributes the load over existing servers.

ScaleDB supports ACID transactions and row-level locking. It has multi-table indexing (which is possible due to the shared disk).

5.5 ScaleBase

ScaleBase takes a novel approach, seeking to achieve horizontal scaling with a layer entirely on top of MySQL, instead of modifying MySQL. ScaleBase includes a partial SQL parser and optimizer that shards tables over multiple single-node MySQL databases. Limited information is available about this new system at the time of this writing, however. It is currently a beta release of a commercial product, not open source.

Implementing sharding as a layer on top of MySQL introduces a problem, as transactions do not span MySQL databases. ScaleBase provides an option for distributed transaction coordination, but the higher-performance option provides ACID transactions only within a single shard/server.

5.6 NimbusDB

NimbusDB is another new relational system. It uses MVCC and distributed object based storage. SQL is the access language, with a row-oriented query optimizer and AVL tree indexes.

MVCC provides transaction isolation without the need for locks, allowing large scale parallel processing. Data is horizontally segmented row-by-row into distributed objects, allowing multi-site, dynamic distribution.

5.7 Other Systems

Google has recently created a layer on BigTable called Megastore. Megastore adds functionality that brings BigTable closer to a (scalable) relational DBMS in many ways: transactions that span nodes, a database schema defined in a SQL-like language, and hierarchical paths that allow some limited join capability. Google has also implemented a SQL processor that works on BigTable. There are still a lot of differences between Megastore / BigTable "NoSQL" and scalable relational systems, but the gap seems to be narrowing.

Microsoft's Azure Tables product provides horizontal scaling for both reads and writes, using a partition key, row key, and timestamps. Tables are stored "in the cloud" and can sync multiple databases. There is no fixed schema: rows consist of a list of property-value pairs. Due to the timing of the original version of this paper, Azure is not covered here.

The major RDBMSs (DB2, Oracle, SQL Server) also include some horizontal scaling features, either shared-nothing or shared-disk.

5.8 Summary

MySQL Cluster uses a "shared nothing" architecture for scalability, as with most of the other solutions in this section, and it is the most mature solution here. VoltDB looks promising because of its horizontal scaling as well as a bottom-up redesign to provide very high per-node performance. Clustrix looks promising as well, and supports solid state disks, but it is based on proprietary software and hardware.

Limited information is available about ScaleDB, NimbusDB, and ScaleBase at this point; they are at an early stage.

In theory, RDBMSs should be able to deliver scalability as long as applications avoid cross-node operations. If this proves true in practice, the simplicity of SQL and ACID transactions would give them an advantage over NoSQL for most applications.

6. USE CASES

No one of these data stores is best for all uses. A user's prioritization of features will be different depending on the application, as will the type of scalability required. A complete guide to choosing a data store is beyond the scope of this paper, but in this section we look at some examples of applications that fit well with the different data store categories.

6.1 Key-value Store Example

Key-value stores are generally good solutions if you have a simple application with only one kind of object, and you only need to look up objects based on one attribute. The simple functionality of key-value stores may make them the simplest to use, especially if you're already familiar with memcached.

As an example, suppose you have a web application that does many RDBMS queries to create a tailored page when a user logs in. Suppose it takes several seconds to execute those queries, and the user's data is rarely changed, or you know when it changes because updates go through the same interface. Then you might want to store the user's tailored page as a single object in a key-value store, represented in a manner that's efficient to send in response to browser requests, and index these objects by user ID. If you store these objects persistently, then you may be able to avoid many RDBMS queries, reconstructing the objects only when a user's data is updated.

Even in the case of an application like Facebook, where a user's home page changes based on updates made by the user as well as updates made by others, it may be possible to execute RDBMS queries just once when the user logs in, and for the rest of that session show only the changes made by that user (not by other users). Then, a simple key-value store could still be used as a relational database cache.

You could use key-value stores to do lookups based on multiple attributes, by creating additional key-value indexes that you maintain yourself. However, at that point you probably want to move to a document store.
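The tailored-page scenario above maps onto a simple cache-aside pattern. The sketch below is plain Python with a dictionary standing in for the key-value store and a hypothetical render_page_from_rdbms function for the expensive query step; a real deployment would use a memcached/Membase-style client with the same get/set/delete shape.

    cache = {}                                    # stand-in for a memcached/Membase client

    def get_tailored_page(user_id, render_page_from_rdbms):
        # Cache-aside: serve the pre-rendered page, rebuilding it only on a miss.
        key = "page:" + user_id
        page = cache.get(key)
        if page is None:
            page = render_page_from_rdbms(user_id)    # the expensive multi-query step
            cache[key] = page                         # stored by user ID, ready to send
        return page

    def on_user_data_changed(user_id):
        # Updates go through the same interface, so we know when to invalidate.
        cache.pop("page:" + user_id, None)

    print(get_tailored_page("u-7", lambda uid: "<html>hello " + uid + "</html>"))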
6.2 Document Store Example

A good example application for a document store would be one with multiple different kinds of objects (say, in a Department of Motor Vehicles application, with vehicles and drivers), where you need to look up objects based on multiple fields (say, a driver's name, license number, owned vehicle, or birth date).

An important factor to consider is what level of concurrency guarantees you need. If you can tolerate an "eventually consistent" model with limited atomicity and isolation, the document stores should work well for you. That might be the case in the DMV application, e.g. you don't need to know if the driver has new traffic violations in the past minute, and it would be quite unlikely for two DMV offices to be updating the same driver's record at the same time. But if you require that data be up-to-date and atomically consistent, e.g. if you want to lock out logins after three incorrect attempts, then you need to consider other alternatives, or use a mechanism such as quorum-read to get the latest data.

6.3 Extensible Record Store Example

The use cases for extensible record stores are similar to those for document stores: multiple kinds of objects, with lookups based on any field. However, the extensible record store projects are generally aimed at higher throughput, and may provide stronger concurrency guarantees, at the cost of slightly more complexity than the document stores.

Suppose you are storing customer information for an eBay-style application, and you want to partition your data both horizontally and vertically:

• You might want to cluster customers by country, so that you can efficiently search all of the customers in one country.
• You might want to separate the rarely-changed "core" customer information such as customer addresses and email addresses in one place, and put certain frequently-updated customer information (such as current bids in progress) in a different place, to improve performance.

Although you could do this kind of horizontal/vertical partitioning yourself on top of a document store by creating multiple collections for multiple dimensions, the partitioning is most easily achieved with an extensible record store like HBase or HyperTable.

6.4 Scalable RDBMS Example

The advantages of relational DBMSs are well-known:

• If your application requires many tables with different types of data, a relational schema definition centralizes and simplifies your data definition, and SQL greatly simplifies the expression of operations that span tables.
• Many programmers are already familiar with SQL, and many would argue that the use of SQL is simpler than the lower-level commands provided by NoSQL systems.
• Transactions greatly simplify coding concurrent access. ACID semantics free the developer from dealing with locks, out-of-date data, update collisions, and consistency.
• Many more tools are currently available for relational DBMSs, for report generation, forms, and so on.

As a good example for relational, imagine a more complex DMV application, perhaps with a query interface for law enforcement that can interactively search on vehicle color, make, model, year, partial license plate numbers, and/or constraints on the owner such as the county of residence, hair color, and sex. ACID transactions could also prove valuable for a database being updated from many locations, and the aforementioned tools would be valuable as well. The definition of a common relational schema and administration tools can also be invaluable on a project with many programmers.

These advantages are dependent, of course, on a relational DBMS scaling to meet your application needs. Recently-reported benchmarks on VoltDB, Clustrix, and the latest version of MySQL Cluster suggest that scalability of relational DBMSs is greatly improving. Again, this assumes that your application does not demand updates or joins that span many nodes; the transaction coordination and data movement for that would be prohibitive. However, the NoSQL systems generally do not offer the possibility of transactions or query joins across nodes, so you are no worse off there.
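The eBay-style partitioning in Section 6.3 can be made concrete with a sketch of the key and column-group layout. The country-prefixed row key and the two column-family names below are illustrative assumptions, not the schema of any particular extensible record store.

    def customer_row_key(country, customer_id):
        # Country prefix first, so one country's customers form a contiguous key range.
        return "%s#%012d" % (country, customer_id)

    # Two column groups: rarely-changed core data and frequently-updated activity.
    COLUMN_GROUPS = {
        "core":     ["name", "address", "email"],       # read-mostly, stored together
        "activity": ["current_bids", "last_login"],     # hot columns, stored separately
    }

    row = customer_row_key("de", 4711)                  # 'de#000000004711'
    # A scan over the key range ('de#', 'de#~') touches only the shards holding
    # customers in that country, while the two column groups may live on
    # different servers.
    print(row, sorted(COLUMN_GROUPS))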
6.4 Scalable RDBMS Example

The advantages of relational DBMSs are well-known:

• If your application requires many tables with different types of data, a relational schema definition centralizes and simplifies your data definition, and SQL greatly simplifies the expression of operations that span tables.

• Many programmers are already familiar with SQL, and many would argue that the use of SQL is simpler than the lower-level commands provided by NoSQL systems.

• Transactions greatly simplify coding concurrent access. ACID semantics free the developer from dealing with locks, out-of-date data, update collisions, and consistency.

• Many more tools are currently available for relational DBMSs, for report generation, forms, and so on.

As a good example for relational, imagine a more complex DMV application, perhaps with a query interface for law enforcement that can interactively search on vehicle color, make, model, year, partial license plate numbers, and/or constraints on the owner such as the county of residence, hair color, and sex (a small SQL illustration appears at the end of this subsection). ACID transactions could also prove valuable for a database being updated from many locations, and the aforementioned tools would be valuable as well. The definition of a common relational schema and administration tools can also be invaluable on a project with many programmers.

These advantages are dependent, of course, on a relational DBMS scaling to meet your application needs. Recently-reported benchmarks on VoltDB, Clustrix, and the latest version of MySQL Cluster suggest that the scalability of relational DBMSs is greatly improving. Again, this assumes that your application does not demand updates or joins that span many nodes; the transaction coordination and data movement for that would be prohibitive. However, the NoSQL systems generally do not offer the possibility of transactions or query joins across nodes, so you are no worse off there.
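The following sketch illustrates how compactly SQL expresses the multi-table, multi-constraint search described above. The schema, column names, and example values are invented; sqlite3 is used only because it ships with Python and lets the snippet run as-is, not because it is one of the scalable systems discussed here.

    import sqlite3

    # Invented schema for illustration; a production DMV schema would differ.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE owner   (owner_id INTEGER PRIMARY KEY, name TEXT,
                              county TEXT, hair_color TEXT, sex TEXT);
        CREATE TABLE vehicle (plate TEXT PRIMARY KEY, owner_id INTEGER,
                              color TEXT, make TEXT, model TEXT, year INTEGER,
                              FOREIGN KEY (owner_id) REFERENCES owner(owner_id));
    """)
    conn.execute("INSERT INTO owner VALUES (1, 'R. Smith', 'Alameda', 'brown', 'F')")
    conn.execute("INSERT INTO vehicle VALUES ('7ABC123', 1, 'red', 'Ford', 'Focus', 2006)")

    # One interactive search spanning both tables: partial plate, vehicle
    # attributes, and constraints on the owner, all in a single query.
    rows = conn.execute("""
        SELECT v.plate, v.make, v.model, o.name, o.county
        FROM vehicle v JOIN owner o ON o.owner_id = v.owner_id
        WHERE v.color = ? AND v.year BETWEEN ? AND ?
          AND v.plate LIKE ? AND o.county = ?
    """, ("red", 2004, 2008, "7AB%", "Alameda")).fetchall()
    print(rows)

Expressing the same search against a key-value or document store would require either pre-built indexes for each combination of attributes or a scan of the data, which is the trade-off this subsection is pointing at.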
7. CONCLUSIONS

We have covered over twenty scalable data stores in this paper. Almost all of them are moving targets, with limited documentation that is sometimes conflicting, so this paper is likely out-of-date if not already inaccurate at the time of this writing. However, we will attempt a snapshot summary, comparison, and predictions in this section. Consider this a starting point for further study.

7.1 Some Predictions

Here are some predictions of what will happen with the systems we've discussed, over the next few years:

• Many developers will be willing to abandon globally-ACID transactions in order to gain scalability, availability, and other advantages. The popularity of NoSQL systems has already demonstrated this. Customers tolerate airline over-booking, and orders that are rejected when items in an online shopping cart are sold out before the order is finalized. The world is not globally consistent.

• NoSQL data stores will not be a "passing fad". The simplicity, flexibility, and scalability of these systems fill a market niche, e.g. for web sites with millions of read/write users and relatively simple data schemas. Even with improved relational scalability, NoSQL systems maintain advantages for some applications.

• New relational DBMSs will also take a significant share of the scalable data storage market. If transactions and queries are generally limited to single nodes, these systems should be able to scale [5]. Where SQL or ACID transactions are important, these systems will be the preferred choice.

• Many of the scalable data stores will not prove "enterprise ready" for a while. Even though they fulfill a need, these systems are new and have not yet achieved the robustness, functionality, and maturity of database products that have been around for a decade or more. Early adopters have already seen web site outages with scalable data store failures, and many large sites continue to "roll their own" solution by sharding with existing RDBMS products. However, some of these new systems will mature quickly, given the great deal of energy directed at them.

• There will be major consolidation among the systems we've described. One or two systems will likely become the leaders in each of the categories. It seems unlikely that the market and open source community will be able to support the sheer number of products and projects we've studied here. Venture capital and support from key players will likely be a factor in this consolidation. For example, among the document stores, MongoDB has received substantial investment this year.

7.2 SQL vs NoSQL

SQL (relational) versus NoSQL scalability is a controversial topic. This paper argues against both extremes. Here is some more background to support this position.

The argument for relational over NoSQL goes something like this:

• If new relational systems can do everything that a NoSQL system can, with analogous performance and scalability, and with the convenience of transactions and SQL, why would you choose a NoSQL system?

• Relational DBMSs have taken and retained majority market share over other competitors in the past 30 years: network, object, and XML DBMSs.

• Successful relational DBMSs have been built to handle other specific application loads in the past: read-only or read-mostly data warehousing, OLTP on multi-core multi-disk CPUs, in-memory databases, distributed databases, and now horizontally scaled databases.

• While we don't see "one size fits all" in the SQL products themselves, we do see a common interface with SQL, transactions, and relational schema that give advantages in training, continuity, and data interchange.

The counter-argument for NoSQL goes something like this:

• We haven't yet seen good benchmarks showing that RDBMSs can achieve scaling comparable with NoSQL systems like Google's BigTable.

• If you only require a lookup of objects based on a single key, then a key-value store is adequate and probably easier to understand than a relational DBMS. Likewise for a document store on a simple application: you only pay the learning curve for the level of complexity you require.

• Some applications require a flexible schema, allowing each object in a collection to have different attributes (a short sketch appears at the end of this subsection). While some RDBMSs allow efficient "packing" of tuples with missing attributes, and some allow adding new attributes at runtime, this is uncommon.

• A relational DBMS makes "expensive" (multi-node multi-table) operations "too easy". NoSQL systems make them impossible or obviously expensive for programmers.

• While RDBMSs have maintained majority market share over the years, other products have established smaller but non-trivial markets in areas where there is a need for particular capabilities, e.g. indexed objects with products like BerkeleyDB, or graph-following operations with object-oriented DBMSs.

Both sides of this argument have merit.
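To make the flexible-schema point above concrete, here is a trivial sketch (the driver records and attribute names are invented): two records in the same logical collection carry different attributes, which a document store accepts directly, while a relational table would need nullable columns or a schema change.

    # Two "documents" in one logical collection, with different attributes.
    drivers = [
        {"name": "Pat Lee",  "license": "D1234567", "endorsements": ["motorcycle"]},
        {"name": "Ana Diaz", "license": "D7654321", "corrective_lenses": True},
    ]

    # The application simply checks for the attributes it cares about.
    for d in drivers:
        if d.get("corrective_lenses"):
            print(d["name"], "requires corrective lenses")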
7.3 Benchmarking

Given that scalability is the focus of this paper and of the systems we discuss, there is a "gaping hole" in our analysis: there is a scarcity of benchmarks to substantiate the many claims made for scalability. As we have noted, there are benchmark results reported on some of the systems, but almost none of the benchmarks are run on more than one system, and the results are generally reported by proponents of that one system, so there is always some question about their objectivity.

In this paper, we've tried to make the best comparisons possible based on architectural arguments alone. However, it would be highly desirable to get some useful objective data comparing the architectures:

• The trade-offs between the architectures are unclear. Are the bottlenecks in disk access, network communication, index operations, locking, or other components?

• Many people would like to see support or refutation of the argument that new relational systems can scale as well as NoSQL systems.

• A number of the systems are new, and may not live up to scalability claims without years of tuning. They also may be buggy. Which are truly mature?

• Which systems perform best on which loads? Are open source projects able to produce highly performant systems?

Perhaps the best benchmark to date is from Yahoo! Research [2], comparing PNUTS, HBase, Cassandra, and sharded MySQL. Their benchmark, YCSB, is designed to be representative of web applications, and the code is available to others. Tier 1 of the benchmark measures raw performance, showing latency characteristics as the server load increases. Tier 2 measures scaling, showing how the benchmarked system scales as additional servers are added, and how quickly the system adapts to additional servers.

In this paper, I'd like to make a "call for scalability benchmarks," suggesting YCSB as a good basis for comparison. Even if the YCSB benchmark is run by different groups who may not duplicate the same hardware Yahoo specified, the results will be informative.

7.4 Some Comparisons

Given the quickly-changing landscape, this paper will not attempt to argue the merits of particular systems, beyond the comments already made. However, a comparison of the salient features may prove useful, so we finish with some comparisons.

Table 1 below compares the concurrency control, data storage medium, replication, and transaction mechanisms of the systems. These are difficult to summarize in a short table entry without over-simplifying, but we compare as follows.

For concurrency:

• Locks: some systems provide a mechanism to allow only one user at a time to read or modify an entity (an object, document, or row). In the case of MongoDB, a locking mechanism is provided at a field level.

• MVCC: some systems provide multi-version concurrency control, guaranteeing a read-consistent view of the database, but resulting in multiple conflicting versions of an entity if multiple users modify it at the same time (see the sketch following this list).

• None: some systems do not provide atomicity, allowing different users to modify different parts of the same object in parallel, and giving no guarantee as to which version of data you will get when you read.

• ACID: the relational systems provide ACID transactions. Some of the more recent systems do this with no deadlocks and no waits on locks, by pre-analyzing transactions to avoid conflicts.
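As a simplified illustration of the multi-version idea in the list above (not any particular system's implementation), the sketch below tags each stored value with a version number; a writer supplies the version it read, and a concurrent update is detected rather than silently overwritten. The store layout and the names VersionConflict, read, and write are invented.

    class VersionConflict(Exception):
        pass

    store = {}  # key -> (version, value)

    def read(key):
        return store.get(key, (0, None))  # returns (version, value)

    def write(key, expected_version, new_value):
        current_version, _ = store.get(key, (0, None))
        if current_version != expected_version:
            # Someone else updated the entity since we read it.
            raise VersionConflict(key)
        store[key] = (current_version + 1, new_value)

    v, doc = read("driver:42")
    write("driver:42", v, {"name": "Pat", "violations": 1})
    try:
        write("driver:42", v, {"name": "Pat", "violations": 2})  # stale version
    except VersionConflict:
        print("conflict detected; re-read and retry, or merge the versions")

The MVCC systems in Table 1 expose variations of this idea (for example, CouchDB's document revisions and Riak's vector clocks), leaving it to the application or the store to reconcile or retry conflicting versions.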
For data storage, some systems are designed for storage in RAM, perhaps with snapshots or replication to disk, while others are designed for disk storage, perhaps caching in RAM. RAM-based systems typically allow use of the operating system's virtual memory, but performance appears to be very poor when they overflow physical RAM. A few systems have a pluggable back end allowing different data storage media, or they require a standardized underlying file system.

Replication can ensure that mirror copies are always in sync (that is, they are updated lock-step and an operation is not completed until both replicas are modified). Alternatively, the mirror copy may be updated asynchronously in the background. Asynchronous replication allows faster operation, particularly for remote replicas, but some updates may be lost on a crash. Some systems update local copies synchronously and geographically remote copies asynchronously (this is probably the only practical solution for remote data).

Transactions are supported in some systems, and not in others. Some NoSQL systems provide something in between, where "local" transactions are supported only within a single object or shard.

Table 1 compares the systems on these four dimensions.
Table 1. System Comparison (grouped by category)

    System          Concurrency      Data Storage    Replication    Tx
    Redis           Locks            RAM             Async          N
    Scalaris        Locks            RAM             Sync           L
    Tokyo           Locks            RAM or disk     Async          L
    Voldemort       MVCC             RAM or BDB      Async          N
    Riak            MVCC             Plug-in         Async          N
    Membrain        Locks            Flash + disk    Sync           L
    Membase         Locks            Disk            Sync           L
    Dynamo          MVCC             Plug-in         Async          N

    SimpleDB        None             S3              Async          N
    MongoDB         Locks            Disk            Async          N
    CouchDB         MVCC             Disk            Async          N
    Terrastore      Locks            RAM+            Sync           L

    HBase           Locks            Hadoop          Async          L
    HyperTable      Locks            Files           Sync           L
    Cassandra       MVCC             Disk            Async          L
    BigTable        Locks + stamps   GFS             Sync + Async   L
    PNUTS           MVCC             Disk            Async          L

    MySQL Cluster   ACID             Disk            Sync           Y
    VoltDB          ACID, no lock    RAM             Sync           Y
    Clustrix        ACID, no lock    Disk            Sync           Y
    ScaleDB         ACID             Disk            Sync           Y
    ScaleBase       ACID             Disk            Async          Y
    NimbusDB        ACID, no lock    Disk            Sync           Y

    (Tx column: Y = full transactions, L = local transactions only, N = none.)

Another factor to consider, but impossible to quantify objectively in a table, is code maturity. As noted earlier, many of the systems we discussed are only a couple of years old, and are likely to be unreliable. For this reason, existing database products are often a better choice if they can scale for your application's needs.

Probably the most important factor to consider is actual performance and scalability, as noted in the discussion of benchmarking. Benchmark references will be added to the author's website cattell.net/datastores as they become available. Updates and corrections to this paper will be posted there as well. The landscape for scalable data stores is likely to change significantly over the next two years!

8. ACKNOWLEDGMENTS

I'd like to thank Len Shapiro, Jonathan Ellis, Dan DeMaggio, Kyle Banker, John Busch, Darpan Dinker, David Van Couvering, Peter Zaitsev, Steve Yen, and Scott Jarr for their input on earlier drafts of this paper. Any errors are my own, however! I'd also like to thank Schooner Technologies for their support on this paper.

9. REFERENCES

[1] F. Chang et al., "BigTable: A Distributed Storage System for Structured Data", Seventh Symposium on Operating System Design and Implementation, November 2006.

[2] B. Cooper et al., "Benchmarking Cloud Serving Systems with YCSB", ACM Symposium on Cloud Computing (SoCC), Indianapolis, Indiana, June 2010.

[3] G. DeCandia et al., "Dynamo: Amazon's Highly Available Key-Value Store", Proceedings of the 21st ACM SIGOPS Symposium on Operating Systems Principles, 2007.

[4] S. Gilbert and N. Lynch, "Brewer's conjecture and the feasibility of consistent, available, and partition-tolerant web services", ACM SIGACT News 33, 2, pp. 51-59, March 2002.

[5] M. Stonebraker and R. Cattell, "Ten Rules for Scalable Performance in Simple Operation Datastores", Communications of the ACM, June 2011.

10. SYSTEM REFERENCES

The following table provides web information sources for all of the DBMSs and data stores covered in the paper, even those peripherally mentioned, alphabetized by system name. The table also lists the licensing model (proprietary, Apache, BSD, GPL), which may be important depending on your application.

    System          License    Web site for more information
    Azure           Prop       blogs.msdn.com/b/windowsazurestorage/
    Berkeley DB     BSD        oss.oracle.com/berkeley-db.html
    BigTable        Prop       labs.google.com/papers/bigtable.html
    Cassandra       Apache     incubator.apache.org/cassandra
    Clustrix        Prop       clustrix.com
    CouchDB         Apache     couchdb.apache.org
    Dynamo          Internal   portal.acm.org/citation.cfm?id=1294281
    GemFire         Prop       gemstone.com/products/gemfire
    HBase           Apache     hbase.apache.org
    HyperTable      GPL        hypertable.org
    Membase         Apache     membase.com
    Membrain        Prop       schoonerinfotech.com/products/
    Memcached       BSD        memcached.org
    MongoDB         GPL        mongodb.org
    MySQL Cluster   GPL        mysql.com/cluster
    Neo4j           AGPL       neo4j.org
    NimbusDB        Prop       nimbusdb.com
    OrientDB        Apache     orienttechnologies.com
    PNUTS           Internal   research.yahoo.com/node/2304
    Redis           BSD        code.google.com/p/redis
    Riak            Apache     riak.basho.com
    Scalaris        Apache     code.google.com/p/scalaris
    ScaleBase       Prop       scalebase.com
    ScaleDB         GPL        scaledb.com
    SimpleDB        Prop       amazon.com/simpledb
    Terrastore      Apache     code.google.com/terrastore
    Tokyo           GPL        tokyocabinet.sourceforge.net
    Versant         Prop       versant.com
    Voldemort       None       project-voldemort.com
    VoltDB          GPL        voltdb.com
