Migrating from a Relational Database to Cassandra: Why, Where, When and How

Everything you need to know about moving from a relational database to Cassandra.

You may be very familiar with what Cassandra is, or the name might just be a buzzword you've heard used when discussing databases. Regardless of your familiarity with Cassandra, this database should be the first tool you consider when you need scalability and high availability without compromising performance.

  1. Migrating from Relational to Cassandra (SQL to CQL). Rahul Xavier Singh, Anant Corporation
  2. TOC: Core Concepts. Detect Bad Models. Data Modeling in Cassandra. Synthetic Sharding. Key Design. Common Patterns. Avoid Tombstones.
  3. Business Platform Success. We build real-time business platforms, connecting customer experiences and information systems with real-time data & analytics platforms like Cassandra, Kafka, and Spark.
  4. Platform Thinking
  5. How? (Diagram: a Business Platform connecting Project Information, Client Service Information, Corporate Guides, Collaborative Documents, Assets & Files, and Corporate Assets.) ● Curate a framework of systems. ● Work with a vetted team of experts. ● Connect it all together. ● Focus on finding, analyzing, and acting on knowledge & communication toward business success.
  6. Streamline. Organize. Unify. Business Platform
  7. Who We Help Succeed
  8. Differences between Relational and Cassandra
  9. Typical Use Cases in RDBMS / Relational
      Relational use cases: 1. Master / Detail (1 to N | "has") 2. Document Attributes (1 to 1 | "is"/"has") 3. Lookup (N to 1 | "is"/"is part of") 4. Connection (N to N | "is"/"is related to"/"has") 5. …
  10. Why Cassandra
      1. Familiar structure: CQL tables ~ SQL tables. 2. Familiar query language: CQL ~ SQL. 3. Schema-constrained queries: no arbitrary queries, joins, or transactions. 4. CQL is a subset of SQL: CQL < SQL. A minimal example follows.
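      To make the familiarity concrete, here is a minimal, hedged sketch; the keyspace, table, and column names are hypothetical, not from the deck:

        -- DDL and queries read much like SQL (hypothetical schema):
        CREATE KEYSPACE IF NOT EXISTS shop
          WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};

        CREATE TABLE IF NOT EXISTS shop.orders_by_customer (
            customer_id uuid,
            order_id    timeuuid,
            total       decimal,
            PRIMARY KEY ((customer_id), order_id)
        ) WITH CLUSTERING ORDER BY (order_id DESC);

        -- Familiar SELECT, but only along the key (no arbitrary WHERE clauses):
        SELECT order_id, total FROM shop.orders_by_customer
        WHERE customer_id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47;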
  11. Differences between RDBMS / Cassandra
      SQL / Relational / RDBMS: 1. Reduce redundancy. 2. Store once / relate / query. 3. ACID: atomicity, consistency, isolation, durability. 4. Immediate consistency. 5. Structured with types. 6. Set schema for all rows. 7. Scale: master/slave, limited scale. 8. Joins, views, arbitrary queries.
      CQL / Non-Relational / Cassandra: 1. Store as often as you need; redundancy is okay. 2. Duplicate as needed (see the sketch below). 3. Predefined queries (no joins). 4. BASE: basically available, soft state, eventual consistency. 5. Tunable consistency. 6. Structured with types, plus semi-structured (maps, sets, collections). 7. Malleable schema (via new rows, columns). 8. Masterless + multi-DC (workload or regional). 9. Globally scalable.
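      A hedged sketch of "duplicate as needed": the same user record written to two tables, one per query path. All names are hypothetical:

        -- One table per query path; the application writes to both:
        CREATE TABLE users_by_id (
            user_id uuid PRIMARY KEY,
            email   text,
            name    text
        );

        CREATE TABLE users_by_email (
            email   text PRIMARY KEY,
            user_id uuid,
            name    text
        );
        -- Each read then touches exactly one table, with no join.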
  12. Cassandra Core Concepts
  13. Cassandra Architecture: Cluster / Data Centers
      Cassandra is not for tiny data. Use it if you NEED: 1. Fast reads and writes of terabytes of data. 2. Replication / availability around the world. 3. To never go down, to always be up.
      Don't use Cassandra if: 1. You have gigabytes of data. 2. Your application can chill in one datacenter. 3. Your system can go down whenever it wants. 4. You just want to be cool.
  14. Cassandra Data Model: Keyspaces & Tables
      Cassandra tables / column families look like SQL Server / MySQL / Postgres tables & databases. They are not. 1. CQL supports queries with a primary key and an optional clustering key. 2. CQL does not support arbitrary queries on columns (see the sketch below). 3. Cassandra shouldn't be managing more than 100-150 tables across any number of keyspaces.
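      A hedged sketch of what "schema-constrained" means in practice; the table is hypothetical:

        CREATE TABLE sensor_readings (
            sensor_id  uuid,
            reading_ts timestamp,
            value      double,
            PRIMARY KEY ((sensor_id), reading_ts)
        );

        -- Supported: restricted by the partition key (plus optional clustering key):
        SELECT value FROM sensor_readings
        WHERE sensor_id = 0b9e3a1c-2f4d-4a77-9c1e-5d6f7a8b9c0d
          AND reading_ts > '2020-01-01';

        -- Rejected without an index or ALLOW FILTERING: filtering a non-key column:
        -- SELECT * FROM sensor_readings WHERE value > 100;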
  15. Cassandra Operations: Read / Write Paths
      Cassandra does these things well. 1. Write: it first writes data immutably to a commit log, adds it to the memtable to make it available, and later flushes it to disk as SSTables. 2. Read: it figures out whether the data is on a node (a Bloom filter is involved), reads from the relevant SSTables, and reconciles the immutable data plus deletes into the latest version. 3. It spreads the load around the ring so that you can have hundreds of nodes doing this without breaking a sweat: beast-like performance.
  16. Cassandra Operational Pitfalls, Visualized
  17. Wide Partitions
  18. Data Skew
  19. Tombstones
  20. Monitoring and Continuous Detection
      Tombstones: how to check for them. 1. Monitor using cfstats (*Tombstones). 2. Monitor the system log ("Tombstone Warn Threshold"). 3. Monitor using OpsCenter, Prometheus + Grafana, Datadog, or Sematext.
      Data skew: bad key design can lead to really, really bad data skew. In some cases, if the number of keys is only 1 or 2, the data exists in only one or two partitions (replicated). 1. Monitor using cfstats (NumberOfKeys, SpaceUsedLive, ReadCounts, WriteCounts). 2. Monitor using OpsCenter, Prometheus + Grafana, Datadog, or Sematext.
      Wide partitions: these will completely screw you over on reads and can take a node out under traffic. 1. Monitor using cfstats (CompactedPartitionMaximumBytes). 2. Monitor system.log for "Compacting large partition". 3. Monitor using toppartitions. 4. Monitor using OpsCenter, Prometheus + Grafana, Datadog, or Sematext.
      https://blog.anant.us/resources-for-monitoring-datastax-cassandra-spark-solr-performance/
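      On Cassandra 4.0+, the "virtual" system tables mentioned on slide 22 expose some of the same signals directly from cqlsh. A hedged sketch; which tables exist varies by version:

        -- Largest compacted partition per table (watch for wide partitions):
        SELECT * FROM system_views.max_partition_size;

        -- Tombstones scanned per read, per table:
        SELECT * FROM system_views.tombstones_per_read;

        -- Live disk space per table (useful for spotting skew across tables):
        SELECT * FROM system_views.disk_usage;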
  21. Monitoring Options: OpsCenter, Grafana + Prometheus, ...
  22. Cassandra Vision
      Objective: provide a way to visually identify "skew". 1. Extract cfstats, tablestats, and soon data from "virtual" system tables. 2. Transform it into an importable/exportable format. 3. Transform it into an Excel workbook that's easy to use. 4. Provide a web UI. 5. Cassandra-"agnostic" tooling.
      Observations: 1. Visualizing distributed systems is difficult. 2. Some information is best as a time-based view; other information needs to be point-in-time. 3. Sometimes managing Cassandra is like a business intelligence / data analytics job.
      Objective: provide / support a standard Cassandra monitoring stack (Prometheus + Grafana). 1. Document clearly how to do it manually. 2. Document how to do it with automation. 3. Provide Ansible playbooks. 4. Provide Dockerized containers. 5. Cassandra-"agnostic" tooling.
      https://blog.anant.us/resources-for-monitoring-datastax-cassandra-spark-solr-performance/
  23. Common Cassandra Migration Patterns
  24. Monolith to Microservices. https://www.infoworld.com/article/3236291/how-to-choose-a-database-for-your-microservices.html
  25. Lift and Shift
      When it works great: 1. The partition key is a GUID/UUID/TimeUUID (see the sketch below). 2. Partition sizes are "sane" because a clustering key is a natural key. 3. There are tons of columns and most of them are null. 4. Tons of text / blobs / JSON / XML. 5. You aren't using JOINs or arbitrary queries. 6. You aren't using many "views" that are basically JOINs or arbitrary queries.
      When it fails bigly: 1. Lookup tables, because they hold a set number of records and there is no need for distribution. 2. When a partition key is "popular". 3. When time-series data doesn't come in consistently. 4. Any type of JOIN / arbitrary query as the primary access pattern.
      Some things to REMEMBER: 1. CQL is similar to, but is NOT, SQL. 2. You can't query what is not a key or indexed. 3. Indexes / materialized views can have skew. 4. Empty columns are better than null columns. 5. Since you lifted and shifted, performance-test with realistic data.
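      A hedged sketch of the happy path: a relational table carried over nearly as-is, because its key was already a UUID and its line number is a natural clustering key. All names are hypothetical:

        CREATE TABLE invoices (
            invoice_id uuid,     -- former relational primary key; hashes evenly around the ring
            line_no    int,      -- natural key, reused as the clustering key
            item       text,
            amount     decimal,
            PRIMARY KEY ((invoice_id), line_no)
        );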
  26. Denormalize / Consolidate
      When it works like awesome sauce: 1. Master/detail objects: one-to-many where the "one" owns the "many". 2. Hierarchical objects (1-n-n). 3. The normalized data is not massive. 4. Natural "objects" that can be organized into records (rows) and folders (partitions). 5. Whole sets of small lookup tables can be put into one "Object_Reference_Table" (see the sketch below). 6. Objects that need to store history or …
      When it falls flat: 1. The normalized data is massive. 2. The standard deviation of partition size is high. 3. You need to query on a non-key attribute / sub-attribute. 4. You need to pull "reports".
      Some things to REMEMBER: 1. Cassandra stores key/values under the hood. 2. The number of rows / columns doesn't matter as long as partitions stay under 100-200 MB. 3. Since consolidated objects can become big, performance-test with realistic data.
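      A hedged sketch of point 5: many small relational lookup tables consolidated into one partitioned reference table. Names are hypothetical:

        CREATE TABLE object_reference (
            object_type text,   -- e.g. 'country', 'status', 'currency'
            code        text,
            label       text,
            PRIMARY KEY ((object_type), code)
        );

        -- One partition read returns an entire former lookup table:
        SELECT code, label FROM object_reference WHERE object_type = 'country';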
  27. Microservices on Cassandra. https://www.infoworld.com/article/3236291/how-to-choose-a-database-for-your-microservices.html
  28. Read / Write Microservices
      When it works like a champ: 1. Treat a table/keyspace/data center as the model for a microservice (domain). 2. Design your models as if you were designing a REST API. 3. Design your models as if they were messages being sent on a queue. 4. When microservices are not waiting on other microservices (non-blocking). 5. A bunch of writes, then a bunch of reads.
      When it fails like a champ: 1. You are trying to do too many things in one operation (you lifted and shifted a monolith). 2. Instead of making 100 queries, you make 1 query with 100 keys in the IN clause (see the sketch below). 3. You're trying to do many read/write/read/write round trips.
      Some things to REMEMBER: 1. Cassandra itself is a set of 15-20 thread pools that pass messages between each pool and sometimes between nodes. 2. Do as many writes as you want. 3. Reads should be one partition per query.
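      A hedged sketch of the IN-clause anti-pattern, reusing the hypothetical shop.orders_by_customer table from the earlier example:

        -- Anti-pattern: one coordinator fans out to up to 100 partitions:
        -- SELECT * FROM shop.orders_by_customer
        -- WHERE customer_id IN (id1, id2, /* ... 100 ids ... */);

        -- Better: 100 single-partition queries, issued asynchronously by the driver:
        SELECT * FROM shop.orders_by_customer
        WHERE customer_id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47;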
  29. CQRS
  30. CQRS Microservices
      When it's "web scale" (drop the microphone): 1. All updates to data are "events" with a payload, processed by command processors. 2. Events are interpreted and can be used to update multiple copies of the data as required (data integrity). 3. Events can be sent to and sourced from a database, cache, or queue, or go directly from the event source to the processor. 4. All reads happen from "query tables" or "report tables" (see the sketch below).
      When it's not "web scale": 1. The processors can't keep up, so your queries show stale information. 2. Too many events take down the queue / cache. 3. You're sending too much information as events; think smaller. 4. You didn't really segregate the command layer from the query layer (they need to scale separately).
      Some things to REMEMBER: 1. Everything from the read/write microservices slide still applies. 2. If commands materialize data in different places, process those in separate threads, asynchronously. 3. Scale the query and command processors as needed.
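      A hedged CQRS sketch in CQL: an append-only command-side event log plus a read-optimized query table that a processor keeps up to date. All names are hypothetical:

        -- Command side: events are appended, never updated in place:
        CREATE TABLE account_events (
            account_id uuid,
            event_id   timeuuid,
            event_type text,
            payload    text,
            PRIMARY KEY ((account_id), event_id)
        );

        -- Query side: a processor folds events into a report table:
        CREATE TABLE account_balance (
            account_id uuid PRIMARY KEY,
            balance    decimal
        );

        -- Reads never touch the event log:
        SELECT balance FROM account_balance
        WHERE account_id = 2f1d9c3e-7b4a-4e18-8c5d-0a9b8c7d6e5f;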
  31. Cassandra Data Modeling Best Practices
  32. Good Key Design
      Some things NOT to do: 1. Avoid integer/long keys unless you couple them with another column in a composite partition key (unless you can show, through realistic data generation, that data won't coalesce on some nodes). 2. Avoid time/date-based keys or TimeUUIDs unless you know for damn sure that you will continuously create data at a given interval all day, every day. 3. Don't just import relational data and expect it to magically work.
      Some things TO do: 1. A UUID will most likely work fine for any given table, but how do you find it again? You will need another table that holds that information. 2. If you must use human-readable keys, you can use a synthetic sharding mechanism (next slide). 3. You can combine known things and take a chance, but test with load: (string, integer, string, integer).
      Some things to REMEMBER: 1. Clustering keys don't spread data around the cluster. 2. Remember that (partition_key, clustering_key) is different from ((partition_key_1, partition_key_2)); the sketch below shows both. 3. Use realistic data: to properly scale Cassandra, or any other system, you need to create realistic data.
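      A hedged sketch of that REMEMBER point, with hypothetical tables:

        -- ((user_id), day): day is a clustering key.
        -- All of a user's rows live in ONE partition, sorted by day:
        CREATE TABLE events_v1 (
            user_id uuid,
            day     date,
            payload text,
            PRIMARY KEY ((user_id), day)
        );

        -- ((user_id, day)): a composite partition key.
        -- Each (user_id, day) pair is its OWN partition: better spread,
        -- but only one row per pair unless you also add a clustering column:
        CREATE TABLE events_v2 (
            user_id uuid,
            day     date,
            payload text,
            PRIMARY KEY ((user_id, day))
        );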
  33. Spreading Data via Synthetic Sharding
      Sometimes you must use the key you have, human-readable as it is, because that is the query path. How do you deal with that? 1. Original partition key: ((CountryName, StateName, CityName, CompanyName)). 2. With an integer shard added: ((CountryName, StateName, CityName, CompanyName, ShardNumber)). 3. ShardNumber could run 1-10 or 1-100, depending on how badly your data is spreading.
      Say you are using a time-based key and notice coalescing around a particular time of day; you could make the weekday itself part of the key. 1. Partition key: ((CreatedDate)). 2. With the weekday number added: ((CreatedDate, WeekDay)). 3. WeekDay would be 0-6, mapped to Sunday-Saturday.
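      A hedged sketch of the first shard scheme; the partition key columns echo the slide, everything else is assumed:

        CREATE TABLE records_by_company (
            country_name text,
            state_name   text,
            city_name    text,
            company_name text,
            shard_number int,      -- e.g. the app computes hash(record_id) % 10
            record_id    timeuuid,
            data         text,
            PRIMARY KEY ((country_name, state_name, city_name, company_name, shard_number),
                         record_id)
        );

        -- Writes pick one shard; reads fan out across shard_number 0-9
        -- (one query per shard) and merge results client-side.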
  34. Avoiding Tombstones: Just Say No to Tombstones!
      Tombstones exist to make insanely fast writes and updates possible while still returning data performantly. (Side conversation: queues as an anti-pattern.) 1. There is no need to actively set null values or delete data. 2. You can always do soft deletes, or use TTL values that expire data automatically (see the sketch below). 3. Watch out for prepared statements sending nulls.
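      A hedged sketch of point 2: letting a TTL expire rows instead of deleting them. The table is hypothetical:

        CREATE TABLE session_tokens (
            user_id    uuid,
            token      text,
            created_at timestamp,
            PRIMARY KEY ((user_id), token)
        );

        -- The row disappears after 24 hours with no explicit DELETE:
        INSERT INTO session_tokens (user_id, token, created_at)
        VALUES (uuid(), 'abc123', toTimestamp(now()))
        USING TTL 86400;

        -- Driver-side: leave columns UNSET rather than binding null,
        -- since a bound null writes a tombstone for that column.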
  35. Questions?
  36. Resources
      Cassandra: ● cassandra.link ● https://anant.github.io/awesome-cassandra ● https://www.sestevez.com/sestevez/cassandradatamodeler/
      Microservices on Cassandra: ● https://www.slideshare.net/JeffreyCarpenter/data-modeling-for-microservices-with-cassandra-and-spark
      Data Modeling Problems in Cassandra: ● https://blog.anant.us/common-problems-cassandra-data-models/
      Monitoring Cassandra / Spark: ● https://blog.anant.us/resources-for-monitoring-datastax-cassandra-spark-solr-performance/
  37. 37. We’re Partnering / Hiring Platforms Datastax, Sitecore, Spark, Docker, Solr, Cassandra, Kafka, Elastic, AWS, Azure Frameworks React/Angular, TypeScript, ASP.NET, Node, Python
