Cassandra - A decentralized storage system
1. Cassandra - A Decentralized
Structured Storage System
Presented By
Tejaswi Ganne
Latha Muddu
Khulud Alsultan
Rajaramya Janagama
Marmik Patel
Arunit Gupta
Chaitanya Sai
Prashant Malik
Facebook
Avinash Lakshman
Facebook
2. Outline
• Data Model.
• Cassandra API.
• System Architecture:
• Partitioning.
• Replication.
• Membership and Failure Detection.
• Bootstrapping and Scaling the Cluster.
• Local Persistence.
4. Introduction
• Apache Cassandra is an open source distributed storage system.
• Manages very large data spread across many commodity servers located across many data centers.
• Named after the Greek mythological prophet Cassandra.
• Initially developed at Facebook to power their Inbox Search feature; later Facebook open sourced it as an Apache Incubator project.
• Features –
• High Scalability
• High Availability
• Fault Tolerant
5. Features
• High Scalability: There is no downtime or interruption to applications, as read and write throughput increases linearly as new machines are added.
• High Availability: Refers to systems that are durable and likely to operate continuously without failure for a long time.
• Fault Tolerant: Data is automatically replicated to multiple nodes, and failed nodes can be replaced in no time. Replication is supported across multiple data centers.
6. Data Model
• Uses a simple data model instead of a full relational data model.
• A table in Cassandra is a distributed multi-dimensional map indexed by a key.
• The value is a structured object.
• Operations are atomic on each row per replica.
8. Data Model Contd…
• Each row can have a different number of columns.
• Each column has < Name, Value, Timestamp >.
• Columns can be ordered by names and timestamps.
• Columns are grouped into Column Families (CF).
• Two types of CFs –
• Simple Column Family
– Has columns.
– A column can be accessed using the convention:
» column_family : column
• Super Column Family
– A CF within a CF.
– Has a Simple CF, or another Super CF, in it.
– A column can be accessed using the convention:
» column_family : super_column : column
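The nested map described above can be sketched in a few lines of Python (a toy model for illustration, not Cassandra's storage code; names and timestamps are made up):

```python
table = {}  # row_key -> column_family -> column -> (value, timestamp)

def put(table, row_key, cf, column, value, timestamp):
    # each column carries its own <Name, Value, Timestamp> triple
    table.setdefault(row_key, {}).setdefault(cf, {})[column] = (value, timestamp)

put(table, "cm7cd", "personal", "firstname", "Chaitanya", 100)
put(table, "cm7cd", "personal", "lastname", "Sai", 100)

# the access convention column_family : column becomes a nested lookup
value, ts = table["cm7cd"]["personal"]["firstname"]
```

Rows with different column sets simply hold dictionaries of different sizes, which is why each row can have a different number of columns.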
9. Key-Value Model
• It is a column-oriented NoSQL system.
• A row is a collection of columns labeled with a name.
• The key is the column name, and a row must contain at least one column.
https://10kloc.wordpress.com/2012/12/25/cassandra-chapter-three-data-model/
10. Related Work
• Amazon Dynamo
Dynamo is a storage system used by Amazon to store and retrieve user shopping carts. Managing its version timestamps requires both read and write operations.
• Google Chubby
GFS uses a simple design with a single master server hosting all the metadata; the data is split into chunks and stored in chunk servers. It is made fault tolerant using the Chubby abstraction, and Chubby achieves fault tolerance through replication.
14. Cassandra Query Language
• The Cassandra Query Language (CQL) is the primary language for communicating with the Cassandra database.
CQL Statements:
• Data Definition Statements
• Data Manipulation Statements
• Queries
15. Data Definition Statements
• Create Keyspace
• Use
• Alter Keyspace
• Drop Keyspace
• Create Table
• Alter Table
• Drop Table
• Create Type
• Alter Type
• Drop Type
• Create Trigger
• Drop Trigger
• Create Function
• Drop Function
• Create Aggregate
• Drop Aggregate
16. Data Definition Statements (Cont..)
1. Create Keyspace:
cqlsh> CREATE KEYSPACE sample_demo WITH
replication = {'class': 'SimpleStrategy',
'replication_factor': 3};
2. Use Keyspace:
cqlsh> USE sample_demo;
3. Alter Keyspace:
cqlsh> ALTER KEYSPACE sample_demo
WITH replication = {'class': 'SimpleStrategy',
'replication_factor': 5};
17. Data Definition Statements (Cont..)
4. Drop Keyspace:
DROP KEYSPACE sample_demo;
5. Create Table:
CREATE TABLE presentors_list ( firstname text,
lastname text, classid int, email text,
PRIMARY KEY (lastname));
6. Alter Table:
ALTER TABLE presentors_list ADD city text;
18. Data Definition Statements (Cont..)
7. Drop Table: removes a table.
DROP TABLE presentors_list;
8. Truncate Table: removes all data from a table.
TRUNCATE presentors_list;
21. Data Manipulation Statements (Cont..)
3. Delete:
DELETE FROM presentors_list
WHERE lastname = 'upadrasta';
4. Batch (the statements inside are illustrative, using the presentors_list table from slide 17):
BEGIN BATCH
INSERT INTO presentors_list (lastname, firstname) VALUES ('patel', 'Marmik');
UPDATE presentors_list SET city = 'Delhi' WHERE lastname = 'patel';
DELETE city FROM presentors_list WHERE lastname = 'upadrasta';
APPLY BATCH;
26. System Architecture
• The core distributed systems techniques:
• Partitioning
• Replication
• Membership and Failure handling
• Bootstrapping and Scaling the Cluster
• Local Persistence
27. System Architecture
• These modules work in synchrony to handle read/write requests.
• A read/write request for a key gets routed to any node in the cluster.
• That node determines the replicas for this particular key.
28. System Architecture
• For writes:
Routes the requests to the replicas and waits for a quorum of replicas to acknowledge the completion of the writes.
• For reads:
• Routes the requests to the closest replica, OR
• Routes the requests to all replicas and waits for a quorum of responses.
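The quorum check described above is just a majority count; a minimal sketch (node counts and ack lists are illustrative):

```python
def quorum(n):
    # a majority of the N replicas
    return n // 2 + 1

def quorum_reached(acks, n):
    # acks: one boolean per replica contacted by the coordinator
    return sum(acks) >= quorum(n)

ok = quorum_reached([True, True, False], 3)  # 2 of 3 acks suffice
```

With N = 3 a write succeeds after two acknowledgements, so one slow or failed replica does not block the request.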
29. Partitioning
• Scales incrementally.
• Dynamically partitions the data over the set of nodes in the cluster.
• Partitions data using consistent hashing.
• Uses an order preserving hash function.
• The output range is treated as a ring.
• Each node is assigned a random value which represents its position on the ring.
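A minimal sketch of such a ring (md5 stands in for the hash function and the node names are made up; this is not Cassandra's actual partitioner):

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32  # illustrative ring size

def token(value):
    # position on the ring; md5 here is just a stand-in hash
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % RING_SIZE

class Ring:
    def __init__(self, nodes):
        # each node gets one position on the ring, derived from its name
        self.entries = sorted((token(n), n) for n in nodes)

    def owner(self, key):
        # a key belongs to the first node at or after its token (wrapping)
        tokens = [t for t, _ in self.entries]
        i = bisect.bisect_right(tokens, token(key)) % len(self.entries)
        return self.entries[i][1]

ring = Ring(["node1", "node2", "node3"])
```

Because a key's owner depends only on the neighboring token, adding or removing one node moves only the keys in that node's arc of the ring.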
31. Consistent Hashing
• Example: Cassandra assigns a hash value to each partition key.
https://docs.datastax.com/en/cassandra/2.0/cassandra/architecture/architectureDataDistributeHashing_c.html
33. Consistent Hashing
• Cassandra places the data on each node according to the value of the partition key and the range that the node is responsible for.
https://docs.datastax.com/en/cassandra/2.0/cassandra/architecture/architectureDataDistributeHashing_c.html
34. Consistent Hashing
Advantage:
• Departure or arrival of a node only affects its immediate neighbors; the others remain unaffected.
Some Challenges:
• The random position assignment of each node on the ring leads to non-uniform data and load distribution.
• The basic algorithm is oblivious to the heterogeneity in the performance of nodes.
35. Partitioning
• Two ways to address this issue:
• Nodes get assigned to multiple positions in the circle.
• Analyze load information on the ring and have lightly loaded nodes move on the ring to alleviate heavily loaded nodes.
38. Replication
• How data is duplicated across nodes.
• Why replication?
• To achieve high availability and durability.
• Ensures fault tolerance by replicating one or more copies of every row in a column family across nodes in the cluster.
39. Replication
• How is replication achieved?
• Each data item is replicated at N (the replication factor) nodes.
• A coordinator node is responsible for the replication of the data items.
• It also replicates the keys across N-1 nodes.
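The simplest placement, replicating on the coordinator plus the next N-1 nodes along the ring, can be sketched as follows (node names are illustrative, and rack/datacenter awareness is ignored):

```python
def replicas_for(ring_nodes, owner_index, n):
    # ring_nodes: node names in ring (token) order; the owner plus the
    # next n-1 distinct nodes clockwise hold the replicas
    count = min(n, len(ring_nodes))
    return [ring_nodes[(owner_index + i) % len(ring_nodes)] for i in range(count)]

nodes = ["n1", "n2", "n3", "n4"]
placement = replicas_for(nodes, 2, 3)  # owner n3, replication factor 3
```

Note how the placement wraps around the ring when the owner is near its end.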
42. Rack Aware
• No two replicas should lie in the same rack.
[Diagram: Rack 1 holds nodes N1–N3 with Replica 1; Rack 2 holds nodes N4–N6 with Replica 2. N = Nodes.]
43. Data Center Aware
• No two replicas should lie in the same datacenter.
[Diagram: Datacenter 1 (Racks 1 and 2, nodes N1–N4) holds Replica 1; Datacenter 2 (nodes N5–N8) holds Replica 2.]
44. Advantages
• Cassandra provides durability guarantees in the presence of node failures and network partitions.
• The storage nodes are spread across multiple datacenters and are connected through high speed data links.
• This scheme of replicating across multiple datacenters allows us to handle entire data center failures without any outage.
47. What is Membership?
• Can be split into two parts:
1. Service Discovery
2. Failure Detection
Service Discovery
• Service Discovery comes into the picture when a new node is set up and added to the cluster.
• Based on Scuttlebutt Reconciliation, a very efficient anti-entropy gossip based mechanism.
• Scuttlebutt makes very efficient use of CPU and of the gossip channel.
48. Gossip Protocol and Scuttlebutt Reconciliation
Gossip Protocol
• The protocol Cassandra uses to discover information about other nodes.
• Information is transferred from a node to the nodes it knows about.
• Used not only for membership, but also to disseminate other control state such as node health, tokens, addresses, data size, etc.
Scuttlebutt Reconciliation
• Participants in a gossip exchange send only the mappings that are more recent than those of the peer.
• Inspired by real life rumor spreading.
• Repairs replicated data by comparing differences.
Robbert van Renesse, Dan Mihai Dumitriu, Valient Gough, and Chris Thomas. Efficient reconciliation and flow control for anti-entropy protocols.
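A toy sketch in the spirit of Scuttlebutt's "send only what is newer" rule (state keys, values, and version numbers are invented for illustration; real Scuttlebutt also handles per-node digests and flow control):

```python
def deltas_for_peer(local_state, peer_versions):
    # local_state: key -> (value, version); peer_versions: key -> version
    # send back only the entries where we are strictly newer than the peer
    out = {}
    for key, (value, version) in local_state.items():
        if version > peer_versions.get(key, -1):
            out[key] = (value, version)
    return out

node_a = {"load": ("0.7", 5), "status": ("up", 2)}
node_b_digest = {"load": 5, "status": 1}
updates = deltas_for_peer(node_a, node_b_digest)
```

Only the stale "status" entry is shipped, which is what keeps the gossip channel utilization low.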
49. Failure Detection
• Comes into the picture when a node is taken down for maintenance, or fails due to an error.
• A mechanism by which a node can locally determine if any other node in the system is up or down.
• Also used to avoid attempts to communicate with unreachable nodes.
• Uses a failure detector which is a modified version of the Φ Accrual Failure Detector.
• The gossip protocol is used for exchanging information.
50. Φ Accrual Failure Detector
• Based on a very simple principle.
• Does not emit a Boolean value stating a node is up or down, but emits a value which represents a suspicion level for each node.
• This value is defined as Φ.
• The idea is to express the value of Φ on a scale that is dynamically adjusted to reflect network and load conditions.
• The difference between a traditional failure detector and an accrual failure detector is which component of the system does which part of failure detection.
51. Traditional Failure Detector vs Accrual Failure Detector
Traditional Failure Detector
• The monitoring and interpretation are combined, and the output of this combination is Boolean.
• The application cannot do any interpretation, as the monitored information is already being interpreted.
Accrual Failure Detector
• Provides a lower level abstraction that avoids the interpretation of monitoring information.
• The value associated with each process represents a suspicion level, which is left for the application to interpret.
http://www.jaist.ac.jp/~defago/files/pdf/IS_RR_2004_010.pdf
52. Properties of Φ
• Φ represents the likelihood that node A is wrong about node B's state.
• When Φ = 1, the likelihood that A makes a mistake in deciding the state of B is about 10%; the likelihood is about 1% when Φ = 2, 0.1% when Φ = 3, and so on.
• A node maintains a sliding window of inter-arrival times of gossip messages to calculate the value of Φ.
• Φ is very good in accuracy and speed.
• It also adjusts well to network conditions and server load conditions.
• Cassandra approximates Φ using an exponential distribution.
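Under the exponential approximation mentioned above, Φ has a short closed form: if heartbeats arrive with mean inter-arrival time m, the probability that a heartbeat is merely late after a gap of t is exp(-t/m), and Φ is the negated base-10 logarithm of that probability. A sketch (the numbers are illustrative, not Cassandra's tuning):

```python
import math

def phi(time_since_last, mean_interarrival):
    # probability the heartbeat is late rather than lost, under the
    # exponential assumption; phi is its negated base-10 logarithm
    p_later = math.exp(-time_since_last / mean_interarrival)
    return -math.log10(p_later)

suspicion = phi(10.0, 1.0)  # a gap 10x the mean: high suspicion
```

This matches the slide's scale: each unit of Φ multiplies the confidence of a mistake by one tenth.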
55. Bootstrapping
What is Bootstrapping?
Adding new nodes is called "Bootstrapping".
Ways of Adding a New Node
There are two ways of adding a node:
– The new node gets assigned a random token which gives its position in the ring. It gossips its location to the rest of the ring, where information is exchanged about one another.
– The new node reads its config file to contact its initial contact points.
• New nodes are added manually by an administrator via a CLI or the Web interface provided by Cassandra.
http://s3.amazonaws.com/ppt-download/cassandraekaterinberg2013-131212053553-phpapp01.pdf
56. Bootstrapping Contd..
• These initial contact points are known as Seeds, which are used by a newly added node to get to know the other nodes; the ultimate goal is for all nodes in the cluster to discover one another.
• Seeds can also come from a configuration service like Zookeeper, which is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
"Because Coordinating Distributed Systems is a Zoo"
57. Facts!!!
• Comparison with Amazon's Dynamo, which is a highly available key-value structured storage system.
"Dynamo's load is nowhere close to what we see in practice over here at Facebook." – Avinash Lakshman
nosqlmatters2012-130102154135-phpapp01.pdf
58. Configuration
In addition to seeds, you'll also need to configure the IP interface to listen on for Gossip and CQL (listen_address and rpc_address respectively). Use a listen_address that will be reachable from the listen_address used on all other nodes, and an rpc_address that will be accessible to clients.
Once everything is configured and the nodes are running, use the bin/nodetool status utility to verify a properly connected cluster.
https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_add_node_to_cluster_t.html
59. Environment
• Node outages are often transient but may last for extended intervals.
• A network outage rarely signifies a permanent departure and should not result in re-balancing of the partition assignment or repair of the unreachable replicas.
• Manual errors could result in the unintentional startup of new Cassandra nodes. As a result, an explicit mechanism is considered appropriate to initiate the addition and removal of nodes from a Cassandra instance.
• An administrator uses a command line tool or a browser to connect to a Cassandra node and issue a membership change to join or leave the cluster.
60. Scaling the Cluster
• Whenever a new node is added into the system, it gets assigned a token such that it can alleviate a heavily loaded node.
• The new node takes over part of the range that another node was responsible for before.
• The Cassandra bootstrapping algorithm is initiated from any other node in the system, either using a command line utility or the web dashboard.
• The node giving up the data streams it over to the new node using kernel copy techniques.
[Figure: Cassandra ring showing scalability.]
61. Future
What is the Future?
• Operational experience has shown that data can be transferred at the rate of 40 MB/sec from a single node. Work is going on to have multiple replicas take part in the bootstrap transfer by parallelizing the effort, similar to BitTorrent, which is a p2p system used to transfer large files to thousands of locations in a short period of time.
• Facebook uses BitTorrent to distribute updates to Facebook servers.
"BitTorrent is fantastic for this, it's really great," Cook said. "It's super-duper fast and it allows us to alleviate a lot of scaling concerns we've had in the past, where it took forever to get code to the webservers before you could even boot it up and run it."
62. Virtual Nodes in Cassandra
• One of the new features slated for Cassandra 1.2's release was virtual nodes (vnodes): a paradigm change from one token or range per node to many per node. Within a cluster these can be randomly selected and be non-contiguous, giving us many smaller ranges that belong to each node.
Advantage?
• Use of heterogeneous machines in a cluster.
• Node failures and backing up.
http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2
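The one-token-per-node ring sketched earlier extends naturally to vnodes; here is a minimal illustration (node names and token counts are invented, and md5 again stands in for the partitioner):

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32
TOKENS = {"big-node": 8, "small-node": 4}  # illustrative token counts

def token(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % RING_SIZE

def build_ring(tokens_per_node):
    # each physical node appears on the ring once per virtual node
    ring = []
    for node, count in tokens_per_node.items():
        for i in range(count):
            ring.append((token(f"{node}#{i}"), node))
    return sorted(ring)

def owner(ring, key):
    positions = [t for t, _ in ring]
    return ring[bisect.bisect_right(positions, token(key)) % len(ring)][1]

ring = build_ring(TOKENS)
```

Giving the bigger machine twice as many tokens gives it roughly twice the key range, which is how vnodes accommodate heterogeneous hardware.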
65. Local Persistence
– Cassandra depends on the local file system for data persistence.
– The data is represented on disk using a format that lends itself to efficient data retrieval.
– For a data store to be considered persistent, it must write to non-volatile storage.
66. Cassandra – more than one server
• All the nodes participate in a cluster.
• They are independent – share nothing.
• Add or remove as needed.
• Need more capacity? Add a server.
67. Focus on a single server
http://www.slideshare.net/patrickmcfadin/introduction-to-cassandra-2014
68. Write operation
• Firstly, it writes into the commit log.
• Then it puts the write into the in-memory data structure, i.e. the memtable.
• The memtable is identified by the primary key.
• Acknowledges back to the client.
• This is a simple process, and that's what makes scaling easier.
• As the memtable starts to fill up, there is a flush process.
• The flush process writes the memtable to a file called an SSTable, i.e. Sorted String Table.
• The writes here are sequential writes.
http://www.slideshare.net/sameiralk/cassendra
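The write path above can be sketched in a few lines (a toy model: the flush threshold, keys, and values are illustrative, and real flushes are triggered by memtable size, not row count):

```python
commit_log = []
memtable = {}   # row_key -> {column: value}
sstables = []
FLUSH_THRESHOLD = 2  # illustrative; real thresholds are size-based

def flush():
    global memtable
    # an SSTable is written out with rows in sorted key order
    sstables.append(sorted(memtable.items()))
    memtable = {}

def write(row_key, column, value):
    commit_log.append((row_key, column, value))       # 1. durability first
    memtable.setdefault(row_key, {})[column] = value  # 2. in-memory update
    # 3. the client would be acknowledged here
    if len(memtable) >= FLUSH_THRESHOLD:
        flush()  # 4. sequential write of the memtable to disk

write("cm7cd", "firstname", "Chaitanya")
write("ab123", "firstname", "Latha")
```

Both the commit log append and the SSTable flush are sequential writes, which is what makes the path fast on spinning disks.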
69. Example
UPDATE users
SET firstname = 'Chaitanya'
WHERE id = 'cm7cd';
write Rowkey, Column
(id = 'cm7cd',
firstname = 'Chaitanya')
76. Compaction
• Compaction is the process which takes all the SSTables, sequentially reads them back, merge-sorts them, keeps the entry with the latest timestamp for each key, and writes a brand new file.
• It deletes the old files.
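The merge step can be sketched as follows (keys, values, and timestamps are illustrative; real compaction streams the sorted files rather than loading them whole):

```python
def compact(sstables):
    # each SSTable maps key -> (value, timestamp); keep the latest entry
    merged = {}
    for table in sstables:
        for key, (value, ts) in table.items():
            if key not in merged or ts > merged[key][1]:
                merged[key] = (value, ts)
    return merged

older = {"cm7cd": ("Chaitanya", 1), "ab123": ("Latha", 1)}
newer = {"cm7cd": ("Chaitanya S", 2)}
compacted = compact([older, newer])
```

After the merged file is written, the input SSTables can be deleted, reclaiming the space held by superseded entries.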
80. Read Operation
• It looks up the memtable before going into the files on the disk.
• Look up is done in order of newest to oldest.
• Cassandra checks an in-memory data structure called a Bloom filter.
• A Bloom filter can quickly tell you whether the key might exist in a file.
• A key in a column family may have many columns, so in order to prevent scanning all the columns it maintains column indices.
• In a cluster, the client can ask any node to retrieve the data.
Consistency Levels
• Set per read and write: ONE, TWO, ALL, QUORUM (> 50%), etc.
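A toy Bloom filter showing why the read path can skip files cheaply (the bit-array size and hash count are illustrative, not Cassandra's tuning):

```python
import hashlib

M, K = 64, 3  # illustrative bit-array size and number of hash positions
bits = [False] * M

def positions(key):
    # derive K bit positions from one digest of the key
    digest = hashlib.sha256(key.encode()).digest()
    return [digest[i] % M for i in range(K)]

def add(key):
    for p in positions(key):
        bits[p] = True

def might_contain(key):
    # False means "definitely absent"; True means "possibly present"
    return all(bits[p] for p in positions(key))

add("cm7cd")
```

A negative answer is always correct, so an SSTable whose filter says "absent" is never touched; only the occasional false positive costs a wasted disk read.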
82. Summary
• Established high scalability, performance and wide applicability.
• Very high update throughput, delivering low latency.
• Future works:
– Adding compression
– Support atomicity across keys
– Secondary index support
84. For More Information
Lakshman, Avinash, and Prashant Malik. "Cassandra: A Decentralized Structured Storage System." ACM SIGOPS Operating Systems Review 44.2 (2010): 35–40.