Why are we here? What is wrong with the status quo?
What is wrong with scaling up? It worked before. Moore's law is still valid and, according to predictions, will be for some time to come.
CAP theorem, also known as (Eric) Brewer's theorem:
Consistency: all clients always have the same view of the data
Availability: each client can always read and write
Partition tolerance: the system keeps working despite physical network partitions
Commodity hardware – in fact, truly commodity hardware. And the industry acknowledges the need to package a lot of "commodity" machines in a small space – new system-on-a-chip models are coming.
NOSQL-databases.org lists 122+ databases: Next-generation databases address some of the following points: being non-relational, distributed, open-source, and horizontally scalable. The original intention was modern web-scale databases. The movement began in early 2009 and is growing rapidly. Often more characteristics apply, such as: schema-free, replication support, easy API, eventual consistency, and more. So the misleading term "NOSQL" (which the community now mostly translates as "not only SQL") should be seen as an alias for something like the definition above.
Twitter generates 7TB/day (2PB+/year) – Hadoop for data analysis, Scribe for logging
LinkedIn – Voldemort
Scalability: relational databases were not designed for, and do not generally cope well with, Internet-scale "big data" applications. Most of the big Internet companies (e.g., Google, Yahoo, Facebook) do not rely on RDBMS technology for this reason.
Cassandra – Facebook Inbox Search
Amazon Dynamo – not open source
Voldemort – open-source implementation of Amazon's Dynamo key-value store
Google BigTable – a sparse, distributed, multi-dimensional sorted map
Distributed Storage Consistency Models - http://horicky.blogspot.com/2008/08/distributed-storage.html
There are a number of client consistency models (http://horicky.blogspot.com/2009/11/nosql-patterns.html):
Strict consistency (one-copy serializability): provides the semantics as if there were only one copy of the data; any update is observed instantaneously.
Read-your-write consistency: allows the client to see its own updates immediately (even if the client switches servers between requests), but not necessarily the updates made by other clients.
Session consistency: provides read-your-write consistency only while the client issues requests within the same session scope (which is usually bound to the same server).
Monotonic read consistency: a time-monotonicity guarantee that the client will only see increasingly up-to-date versions of the data in future requests.
Eventual consistency: the weakest form of guarantee. The client can see an inconsistent view while updates are in progress. This model works when concurrent access to the same data is very unlikely, and the client must wait for some time if it needs to see its own previous update.
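The difference between these models can be sketched with a toy two-replica store (purely illustrative, not a real database; replication is modeled as an explicit sync() step):

```python
# Toy illustration of two client consistency models:
# eventual consistency vs. read-your-write consistency.

class Replica:
    def __init__(self):
        self.data = {}

class ToyStore:
    """Two replicas; writes land on one and propagate only on sync()."""
    def __init__(self):
        self.primary = Replica()
        self.secondary = Replica()

    def write(self, key, value):
        self.primary.data[key] = value          # update lands on one replica

    def read_any(self, key):
        # Eventual consistency: a read may hit a replica that has not
        # yet received the update and return a stale value.
        return self.secondary.data.get(key)

    def read_your_write(self, key):
        # Read-your-write: route this client's reads to the replica
        # that took its update.
        return self.primary.data.get(key)

    def sync(self):
        # Replication catching up; after this, all replicas agree.
        self.secondary.data.update(self.primary.data)

store = ToyStore()
store.write("user:1", "alice")
print(store.read_any("user:1"))        # stale read: update not yet replicated
print(store.read_your_write("user:1")) # the client sees its own write
store.sync()
print(store.read_any("user:1"))        # replicas eventually converge
```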
The Simple Magic of Consistent Hashing - http://www.paperplanes.de/2011/12/9/the-magic-of-consistent-hashing.html Consistent Hashing - http://michaelnielsen.org/blog/consistent-hashing/
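The core idea from those two posts can be sketched in a few lines (node names and the vnode count are arbitrary): keys and nodes hash onto the same ring, a key belongs to the first node point clockwise from its hash, and removing a node only remaps the keys that node owned.

```python
import bisect
import hashlib

def h(key):
    """Stable hash onto the ring (md5 chosen for illustration, not security)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Each node claims `vnodes` points on the ring; a key belongs to the
    first node point at or after the key's hash (wrapping around)."""
    def __init__(self, nodes, vnodes=100):
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def node_for(self, key):
        i = bisect.bisect(self.points, h(key)) % len(self.ring)
        return self.ring[i][1]

before = ConsistentHashRing(["node-a", "node-b", "node-c"])
after = ConsistentHashRing(["node-a", "node-b"])      # node-c removed
keys = [f"user:{i}" for i in range(1000)]
moved = [k for k in keys if before.node_for(k) != after.node_for(k)]
# Only keys that lived on node-c move; keys on node-a and node-b stay put,
# which is exactly what plain `hash(key) % num_nodes` fails to guarantee.
assert all(before.node_for(k) == "node-c" for k in moved)
```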
NO SQL: What, Why, How
Director, Cloud Platforms
What is wrong with SQL?
Is it answering your needs?
Does it fit your solution?
Do you reap its benefits?
Will it support the needs of your projects in the future?
Gov't data stored in the US (2009):
more than 800 petabytes
The data doesn’t fit on one node
The data may not fit in one rack
Each machine operates independently, with minimal coordination between them
There is a need to partition data across lots of machines
There is a limit to RDBMS scale
Scaling up doesn't work
Scaling out with traditional RDBMSs isn't so hot either
Sharding scales, but you lose all the features that make RDBMSs worth using
Sharding and table partitioning are operationally heavy
If we don't need relational features, we want a distributed datastore
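A minimal sketch of application-level sharding (the shard count and key names are made up) shows both the mechanism and what it costs:

```python
import zlib

NUM_SHARDS = 4  # illustrative fixed shard count

def shard_for(key: str) -> int:
    """Route a row to a shard by a stable hash of its key. Python's built-in
    hash() is randomized per process, so crc32 is used for a stable mapping."""
    return zlib.crc32(key.encode()) % NUM_SHARDS

shards = [dict() for _ in range(NUM_SHARDS)]  # stand-ins for 4 separate databases

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    return shards[shard_for(key)].get(key)

put("user:42", {"name": "Alice"})
assert get("user:42") == {"name": "Alice"}

# What sharding costs you: a query across users now has to touch every
# shard - there are no cross-shard joins or cross-shard transactions.
all_users = [v for s in shards for v in s.values()]
```

Note also that a fixed `% NUM_SHARDS` mapping is exactly why adding a shard later forces mass data movement, which is the problem consistent hashing addresses.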
Fallacies of Distributed Computing
1. The network is reliable
2. Latency is zero
3. Bandwidth is infinite
4. The network is secure
5. Topology doesn’t change
6. There is one administrator
7. Transport cost is zero
8. The network is homogeneous
Here: Calxeda's EnergyCard atop an HP Redstone server prototype. Source: Jon Snyder
CNET: Google uncloaks once-secret server
"RAM is the new disk, disk is the new tape."
- Jim Gray, (former) manager of Microsoft Research's Bay Area Research Center
Why it is important
New levels of scalability
Distributed by nature
There’s no need for a DBA, no need for complicated SQL queries, and it is fast. Hooray, freedom for the people!
Data models are still important
Interfaces and interoperability - nonexistent
Understand limitations of the technology
OPS are screwed
Advantages of NOSQL
Cheap, easy to implement
Removes impedance mismatch between objects and tables
Quickly process large amounts of data
Data Modeling Flexibility (including schema evolution)
Disadvantages of NOSQL
Data is generally duplicated, potential for inconsistency
No standard language or format for queries
Depends on the application layer to enforce data integrity
Document Databases
Based loosely on documents / POCOs
Data model – collections of documents
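A sketch of what "collection of schema-free documents" means in practice (the collection and field names here are made up for illustration):

```python
# A "collection" is just a set of documents addressed by id; two documents
# in the same collection need not share fields - no schema enforces a shape.
orders = {}  # collection: document id -> document

orders["o-1001"] = {
    "customer": "alice",
    "items": [{"sku": "A17", "qty": 2}, {"sku": "B3", "qty": 1}],  # nested data
    "shipped": False,
}
orders["o-1002"] = {          # a differently shaped document, same collection
    "customer": "bob",
    "gift_note": "Happy birthday!",
}
```

This is also what "schema evolution" flexibility means: new fields simply appear on new documents, with no ALTER TABLE step.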
Graph Databases
Based on graph theory
Data model – graph: nodes, edges, properties
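The property-graph model in miniature (node ids, labels, and the relationship name are invented for the example):

```python
# Property graph: nodes and edges each carry a dict of properties.
nodes = {
    "u1": {"label": "Person", "name": "Alice"},
    "u2": {"label": "Person", "name": "Bob"},
}
edges = [
    # (source, relationship, target, edge properties)
    ("u1", "FOLLOWS", "u2", {"since": 2011}),
]

def neighbors(node_id):
    """Traversal primitive: who does this node point at?"""
    return [dst for src, rel, dst, props in edges if src == node_id]
```

Queries on this model are traversals (follow edges from a start node) rather than joins over tables.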
Key Value Stores
Based on DHT (Distributed Hash Table),
Amazon’s Dynamo design
Data model – collection of key-value pairs
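The key-value data model in its entirety (keys and values below are illustrative): the store sees only opaque values addressed by key, and get/put/delete is the whole query language.

```python
# Minimal key-value store sketch; a real DHT spreads `store` across machines.
store = {}

def put(key, value):
    store[key] = value

def get(key):
    return store.get(key)

def delete(key):
    store.pop(key, None)

# The value is just bytes to the store - structure lives in the application.
put("user:42:cart", b'{"items": ["A17", "B3"]}')
assert get("user:42:cart") == b'{"items": ["A17", "B3"]}'
# No secondary indexes: "find all carts containing A17" means scanning every key.
```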
Column Family Stores
Based on Google’s BigTable design
Data model – big table, column families
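A data-model sketch in the BigTable style: a sparse map from (row key, "family:qualifier") to a value, where missing cells cost nothing. The row and column names follow the webtable example from the BigTable paper.

```python
# Sparse, sorted, multi-dimensional map (timestamps omitted for brevity).
table = {}  # row key -> {"family:qualifier": value}

def put(row, family, qualifier, value):
    table.setdefault(row, {})[f"{family}:{qualifier}"] = value

put("com.cnn.www", "contents", "html", "<html>...</html>")
put("com.cnn.www", "anchor", "cnnsi.com", "CNN")   # column created on the fly
put("com.mysite", "contents", "html", "<html>hi</html>")

# Rows are kept sorted by key, which is what makes range scans cheap:
for row_key in sorted(table):
    print(row_key, sorted(table[row_key]))
```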
Stop thinking relational
Start thinking about how your data will be used
Optimize for reads? Writes?
Think about your domain objects and business logic in native .Net
Denormalize if needed
Reference entities to other entities, or collections of them
Identify aggregate root(s)
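The guidance above might look like this for a hypothetical order aggregate (all ids and field names are illustrative, not from any real schema):

```python
# Aggregate-root sketch: the Order embeds (denormalizes) its line items so
# one read loads the whole aggregate, and references the customer aggregate
# by id instead of joining.
order = {
    "_id": "o-1001",
    "customer_id": "c-42",       # reference to another aggregate root
    "customer_name": "Alice",    # denormalized copy, optimized for reads
    "lines": [                   # embedded: saved and loaded with the root
        {"sku": "A17", "qty": 2, "price": 10},
        {"sku": "B3", "qty": 1, "price": 5},
    ],
}

def order_total(o):
    # Business logic works on the whole aggregate in memory - no joins.
    return sum(line["qty"] * line["price"] for line in o["lines"])

assert order_total(order) == 25
```

The trade-off is the one listed under disadvantages: if the customer renames, every denormalized `customer_name` copy must be updated by the application.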
First impressions may disappoint, but it gets better over time
Know and use the tools you need for the job at hand
Extra Links and References
Distributed Storage - http://horicky.blogspot.com/2008/08/distributed-storage.html
Images from here