3. The Viber service
• Free, cross-platform text messaging
• Free, cross-platform VoIP calls
(voice and video)
• Photo, video and location sharing
• Stickers and Emoticons
• Group communication platform
(up to 100 participants)
• Push To Talk
5. Simplicity and User Experience
• No registration needed
• User ID = your mobile number
• Automatic friend detection
(no "add a friend" step)
• Always on. No battery impact
• 32 languages
• Multi-device experience:
Mobile, tablet and desktop
12. 2nd generation DB architecture advantages
• Got us through the first few years of extreme growth
• Never lost data from MongoDB
• Redis performance
13. 2nd generation DB architecture problems
• MongoDB performance
• MongoDB does not scale well with many application servers
• Redis – In-memory database with no sharding
• Redis Sharder – Not manageable or robust enough
14. 3rd generation DB architecture requirements
• High performance
• Large data sets
Solution:
• Scalable
• Robust
• Backed-up
• Always on
• Easy to monitor
• Prefer single DB solution
16. Migrating from 2nd to 3rd generation DBs
• Migrate a live system
• Zero downtime
• No data loss
• Consistent data
17. How did we migrate?
• Stage 1: Add new CB cluster in parallel to existing cluster
Only delete keys from CB
• Stage 2: Read only from MongoDB
Write/Delete to both CB & MongoDB
• Stage 3: Background process that copies all data from
MongoDB to CB (if it doesn’t exist)
• Stage 4: Validate data (both DB’s should be identical)
• Stage 5: Read only from CB
Write/Delete to both CB & MongoDB
• Stage 6: Remove MongoDB and use only CB
19. Back-end servers
• Over 500 application servers
• 2nd generation DB architecture:
• MongoDB – 1 cluster with 150 servers (master + 2 slaves)
• Redis – 3 clusters with a total of 144 servers (master + 1 slave)
• 3rd generation DB architecture:
• 7 Couchbase clusters (up to 60 nodes each)
• 1 – 2 replicas, XDCR & external backup
• Total of less than 200 Couchbase servers
Increased performance using fewer DB servers!
20. Interesting facts
1. We have recently doubled our Couchbase instance sizes to cope with increased usage
2. Total of ~1.5 million DB operations per second
3. Bigger clusters don't necessarily do more ops
4. Highest performing clusters have 100% of their data in memory
Viber was founded almost 4 years ago.
It started as a free app for iPhones providing free VoIP calls.
After a few months an Android version was released and text messaging was introduced.
Since then many new features have been added, and today Viber is a social communications platform available for almost all mobile phones, tablets and desktop OSs.
A few months ago we were bought by Rakuten, the largest Japanese e-commerce company.
Viber provides reliable text messaging, giving you indications when a message was sent, delivered to the recipient and even when it was read.
Groups of up to 100 users are supported, with all media options available: sending photos, videos, stickers, doodles and your location.
Recently we added a new Push To Talk feature which sends your voice as you are talking without waiting for the recording to finish.
In a group conversation with PTT, you can broadcast your voice instantly to up to 100 people.
In 2014 we started to monetize the Viber service.
Viber Out – VoIP calls from Viber to non-Viber phone numbers (landlines & mobile numbers) at very low rates
Stickers – Both free and premium stickers that can be purchased. In addition to branded content such as Smurfs and Garfield, we have created Viber characters such as Violet, Eve, Freddy, Blu, Zoe & more that you can see in the pictures here.
Viber is very easy to use. It uses your mobile number as your registration ID and detects which of your friends have Viber from your address book.
In order to provide the best user experience, Viber clients are always on and connected to our servers allowing for sub-second updates.
We were able to provide this level of service without sacrificing battery life.
Viber is primarily a mobile application, but we also support both desktops and tablets. All your devices are registered under the same phone number and are fully synced with each other. All messages & calls are received by all devices, and if you read a message on one device it is automatically shown as read on the others. Messages sent from one device appear on all other devices instantly. Calls can be seamlessly transferred between devices without the other side even noticing.
Next I would like to talk about what runs the Viber service.
The back-end that allows sending billions of messages and minutes of calls with sub-second latencies to hundreds of millions of users.
At first Viber was a much smaller service, and for the first few months Viber used an in-house in-memory database solution.
As Viber usage grew exponentially, we had to move to a more scalable solution. We decided to use a sharded NoSQL database to get a fast implementation and very easy scaling. In early 2011 this was not just cutting-edge but bleeding-edge technology. We initially ran on the beta of the very first MongoDB version that supported sharding. We were one of the first big MongoDB deployments back then (if not the biggest).
All Viber servers on AWS
Redis Sharder developed in house by Viber because Redis does not support sharding
Redis in Master/Slave configuration
MongoDB with 2 additional replicas for each node
MongoDB uses SSD based instances for active and 1st replica and EBS for 2nd replica
Redis used both as cache for MongoDB and stand-alone DB for either high-throughput activity or for very large datasets (Billions of keys)
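To illustrate the idea behind a client-side sharding layer like Redis Sharder, here is a minimal sketch in Python; the hashing, shard count and redis-py usage are illustrative assumptions, not Viber's actual implementation:

```python
import zlib

import redis


class ShardedRedis:
    """Client-side sharding: each key is hashed to one Redis master."""

    def __init__(self, hosts):
        self.shards = [redis.Redis(host=h, port=p) for h, p in hosts]
        # Shard count fixed at a power of 2, mirroring the "powers of 2"
        # scaling limitation mentioned later in the talk.
        assert len(self.shards) & (len(self.shards) - 1) == 0

    def _shard(self, key):
        # A stable hash of the key picks the shard; with a power-of-2
        # shard count the modulo reduces to a cheap bitmask.
        return self.shards[zlib.crc32(key.encode()) & (len(self.shards) - 1)]

    def get(self, key):
        return self._shard(key).get(key)

    def set(self, key, value):
        return self._shard(key).set(key, value)


# Usage: four master shards (each master's slave is not shown here).
cluster = ShardedRedis([("redis-1", 6379), ("redis-2", 6379),
                        ("redis-3", 6379), ("redis-4", 6379)])
cluster.set("user:12345", '{"name": "..."}')
```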
Got us this far – 3 years of extreme growth
Never lost data from MongoDB – even though we had many server failures, some of which even caused downtime, we were always able to access the data at the end of the day
Redis performance – Redis is a very fast DB and was able to give us the speed we needed
MongoDB performance:
Only provided tens of thousands of ops, whereas we needed hundreds of thousands of ops
Performance of databases with billions of keys dropped significantly
MongoDB scale: each application server had many worker threads, all of which would connect to a single MongoDB cluster. MongoDB would manage each connection with a separate thread and stack, wasting a lot of memory and CPU. With hundreds of application servers each holding dozens of connections, that adds up to tens of thousands of server-side threads, and this started to become a serious problem.
Redis Sharder – built in-house and not a commercial-grade solution. It has VERY limited manageability and is not robust enough. Scalability is limited and must be done in powers of 2. The client implementation supports most of the Redis commands, but not bulk commands, hindering performance.
Redis In-Memory DB – Redis is an in-memory DB with limited persistence to disk, but because MongoDB could not perform fast enough, we ended up using Redis for most of our DB operations, without MongoDB at all.
When looking for a 3rd generation DB architecture, we were not looking to replace a standard RDBMS-based system with a NoSQL system like most companies. We were already using a NoSQL solution by one of the market leaders, and it simply was not working well enough.
High performance – hundreds of thousands of ops at consistent low latencies
Large data sets – Billions of keys
Scalable – Easy to add additional server nodes without interrupting production
Robust – The solution should be able to withstand node failures without any downtime. Data can be persisted to disk with a varying number of replicas and backups for different data (each bucket/cluster will have different robustness settings)
Backed-up – Daily / weekly backups that can be used to perform a full recovery in case of failure
Always on – no downtime, including during SW/HW upgrades, backups, etc.
Easy to monitor – a good monitoring solution that can show both live and historical statistics. The interface should include a graphical UI but also be accessible via an external interface to connect to our monitoring/alert system
Prefer single DB solution (instead of cache + persistent DB)
Several Couchbase clusters (up to 60 nodes each)
Each cluster has different access patterns (mainly read, mainly write/delete, large data sets, heavy disk usage) – though all with SSD drives for very fast access
Different replica settings for each bucket, depending on data requirements
We are currently using CB v2.5.1
All clusters are spread evenly across 3 AZs (availability zones) for redundancy
Backup Couchbase cluster
Synced using XDCR for specific buckets
This cluster contains views for real-time data analytics
Can be used as alternative cluster in case of full failure of primary cluster
Daily / weekly backups from most of the CB clusters. Backup is compressed and uploaded to S3.
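As a rough illustration of the last step of that backup pipeline, here is a minimal sketch of compressing a backup directory and uploading it to S3; the paths, bucket name and use of boto3 are assumptions, not Viber's actual tooling:

```python
import tarfile
from datetime import date

import boto3

backup_dir = "/backups/couchbase"                  # hypothetical path
archive = f"/tmp/cb-backup-{date.today()}.tar.gz"  # hypothetical name

# Compress the backup directory produced by the Couchbase backup tool.
with tarfile.open(archive, "w:gz") as tar:
    tar.add(backup_dir, arcname="couchbase")

# Upload the compressed archive to S3 for off-cluster storage.
s3 = boto3.client("s3")
s3.upload_file(archive, "viber-db-backups", archive.rsplit("/", 1)[-1])
```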
Migrate live system – We need to migrate the back-end databases while the system is receiving millions of new users and hundreds of thousands of requests per second.
Zero downtime – The system must continue running throughout the whole migration process without even a minute of downtime
We must make sure no data is lost during the migration process
As data is constantly being updated we must make sure we are migrating the most up-to-date data and that data can be modified multiple times during the migration process
As we have hundreds of database servers, all in AWS, we need to take into account that several machines will probably fail during the migration process, and this must not affect data migration or consistency
Because of the complexity of this process, it was probably the most time consuming and delicate part of moving to Couchbase.
As the Couchbase deployment was divided into several clusters, we only introduced 1-2 new clusters at a time
Stage 1 – We need this stage to maintain data consistency because we have hundreds of application servers and upgrading them can take a few hours. When we move from stage 1 to stage 2 we need to make sure that if a server in stage 2 writes a key and then a server in stage 1 deletes it, it will not appear in CB.
Stage 2 – This stage will make sure that ongoing changes are written to CB
Stage 3 – We exported all data from MongoDB after all servers had been upgraded to stage 2. A background process reads all this data and inserts it into Couchbase only if the key does not already exist (if it exists, the live copy is always newer).
Stage 4 – After background data migration is complete, both databases should be identical. To validate this we log all data import transactions and live updates. If there are any errors during the import we can always re-import the data. We also compare the list of keys from the MongoDB export to the logs of the actual keys inserted and make sure we didn’t miss anything. We also do a random check on a few tens of thousands of keys and compare the data between MongoDB and CB to check for inconsistencies. If there are any problems we can always start the migration process again.
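A minimal sketch of what such a stage 4 spot check could look like; the file paths, collection/bucket names and SDK calls (pymongo plus the 2.x-era Couchbase Python SDK) are illustrative assumptions:

```python
import random

from couchbase.bucket import Bucket
from pymongo import MongoClient

mongo = MongoClient("mongodb://mongos-1:27017")["viber"]["users"]  # hypothetical
cb = Bucket("couchbase://cb-node-1/users")                         # hypothetical

# The key list comes from the stage 3 MongoDB export; assume one key
# per line in a plain-text file (an illustrative simplification).
with open("/exports/mongo_keys.txt") as f:
    all_keys = [line.strip() for line in f]
sample = random.sample(all_keys, min(50_000, len(all_keys)))

mismatches = []
for key in sample:
    mongo_doc = mongo.find_one({"_id": key})
    try:
        cb_doc = cb.get(key).value
    except Exception:  # key missing from Couchbase entirely
        cb_doc = None
    # A real check would normalize fields (e.g. MongoDB's _id) first.
    if cb_doc != mongo_doc:
        mismatches.append(key)

print(f"{len(mismatches)} mismatching keys out of {len(sample)}")
```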
Stage 5 – This stage is necessary just to maintain data consistency during server upgrade (stage 4 servers are still reading from MongoDB).
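Putting the stages together, here is a minimal sketch of how each application server could route reads, writes and deletes depending on its migration stage; the STAGE flag and the helper functions are placeholders, not Viber's actual code:

```python
# STAGE is assumed to come from deployment config and is rolled out
# gradually across the hundreds of application servers.
STAGE = 2

# Placeholder DB operations; in production these would wrap the real
# pymongo and Couchbase SDK calls.
def mongo_get(k): ...
def mongo_set(k, v): ...
def mongo_delete(k): ...
def cb_get(k): ...
def cb_set(k, v): ...
def cb_delete(k): ...

def read(key):
    # Stages 1-4: MongoDB is still the source of truth.
    # Stages 5-6: Couchbase serves all reads.
    return mongo_get(key) if STAGE <= 4 else cb_get(key)

def write(key, value):
    if STAGE >= 2:    # stages 2-6 write to Couchbase
        cb_set(key, value)
    if STAGE <= 5:    # stages 1-5 keep writing to MongoDB
        mongo_set(key, value)

def delete(key):
    # Deletes go to Couchbase from stage 1 on, so a key written by an
    # already-upgraded (stage-2) server cannot survive a delete issued
    # by a not-yet-upgraded (stage-1) server.
    cb_delete(key)
    if STAGE <= 5:
        mongo_delete(key)
```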
CB does not support nodes with different bucket sizes, so to increase capacity we had to replace all nodes with new, bigger instances and only then increase the bucket sizes. All this was done using the CB rebalance feature, without interrupting normal operations.
Total DB ops is about 1.5M ops, though the load is not spread evenly across the clusters; individual clusters range from 20K ops to 600K ops.
The cluster performing the highest ops is only a 21-node cluster, and the two lowest-ops clusters are actually the 2 biggest ones. The reason they are so large is that they hold much more data.
To achieve the highest performance, data should be in memory and not read from disk, which impedes performance greatly
Daily oscillation between 100K to 350K ops
Over 2.5 billion keys using 2 replicas
This cluster is replicated to the backup cluster using XDCR. You can see that the latency for XDCR replication is about 1.5ms.
MongoDB supports updating documents server-side, but since CB is so fast, we are able to retrieve the document, update it and set it back (using CAS to verify it wasn't changed) much faster than a server-side update
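A minimal sketch of this get-modify-set pattern with CAS, using the 2.x-era Couchbase Python SDK; the bucket and key names are illustrative:

```python
from couchbase.bucket import Bucket
from couchbase.exceptions import KeyExistsError

bucket = Bucket("couchbase://cb-node-1/users")  # hypothetical

def update_document(key, mutate):
    """Get-modify-set with optimistic locking; retries on CAS conflict."""
    while True:
        result = bucket.get(key)
        doc = result.value
        mutate(doc)  # apply the change in application code
        try:
            # replace() raises KeyExistsError if the CAS has changed,
            # i.e. someone else wrote the key after our get().
            bucket.replace(key, doc, cas=result.cas)
            return doc
        except KeyExistsError:
            continue  # lost the race; re-read and try again

# Usage: bump a counter inside the JSON document.
update_document("user:12345", lambda d: d.update(msg_count=d["msg_count"] + 1))
```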
Redis supports server-side data structures on a key level. In order to achieve similar functionality with CB, we used several solutions.
To simulate sets and lists we used the append function, which is an atomic operation and much faster than retrieving, updating and setting the key. The problem is that appending is not possible on valid JSON documents, so we appended the serialized JSON objects with a delimiter between them; to remove an object we append a minus sign and specify only the object key.
To simulate large maps where we want to retrieve a single object fast, we did not want to put every object in a separate key, because that would create very large metadata. The solution was to break a single map key down into several keys, using our own hashing algorithm to know in which key a specific object is located. So instead of having 1 large value we have 10 values, which provides a good trade-off between speed and metadata size.
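A minimal sketch of both patterns, again with the 2.x-era Couchbase Python SDK; the delimiter, shard count and hashing are illustrative assumptions, not Viber's exact scheme:

```python
import json
import zlib

from couchbase import FMT_UTF8
from couchbase.bucket import Bucket

bucket = Bucket("couchbase://cb-node-1/data")  # hypothetical

DELIM = "\x1f"  # separator between serialized objects

# --- Simulated set/list via atomic append ---------------------------------

def list_create(key):
    # append() requires an existing raw-string value, so the key is
    # seeded as an empty UTF-8 string rather than a JSON document.
    bucket.upsert(key, "", format=FMT_UTF8)

def list_add(key, obj):
    # append() is atomic, so concurrent adders never clobber each other.
    bucket.append(key, DELIM + json.dumps(obj))

def list_remove(key, obj_key):
    # A "minus" marker with only the object key records the removal;
    # readers (or a periodic compaction pass) drop the matching object.
    bucket.append(key, DELIM + "-" + obj_key)

# --- Large map sharded across N sub-keys -----------------------------------

N_SHARDS = 10  # one logical map stored as 10 values

def _subkey(map_key, obj_key):
    # Our own hashing decides which sub-key holds a given object.
    return f"{map_key}:{zlib.crc32(obj_key.encode()) % N_SHARDS}"

def map_get(map_key, obj_key):
    shard = bucket.get(_subkey(map_key, obj_key)).value
    return shard.get(obj_key)

def map_set(map_key, obj_key, obj):
    subkey = _subkey(map_key, obj_key)
    result = bucket.get(subkey, quiet=True)
    shard = result.value or {}
    shard[obj_key] = obj
    bucket.upsert(subkey, shard)  # real code would CAS-protect this, as above
```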
Initially we processed the daily backups, which are stored in sqlite3 format, to answer large range queries, such as a list of all Viber phone numbers in a certain country or of users with certain data in their JSON object. We currently create views only on our backup cluster, so as not to impede performance. We plan to move further toward using views so that we can work with live data.
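A minimal sketch of defining and querying such a view on the backup cluster; the design document, field names and SDK calls are illustrative assumptions (Couchbase view map functions are written in JavaScript):

```python
from couchbase.bucket import Bucket

bucket = Bucket("couchbase://backup-node-1/users")  # hypothetical

# Couchbase view map functions are JavaScript; this one indexes users
# by country so phone numbers can be listed per country.
design_doc = {
    "views": {
        "by_country": {
            "map": """
                function (doc, meta) {
                    if (doc.country && doc.phone) {
                        emit(doc.country, doc.phone);
                    }
                }
            """
        }
    }
}
bucket.bucket_manager().design_create("users", design_doc, use_devmode=False)

# All phone numbers registered in one country.
for row in bucket.query("users", "by_country", key="IL"):
    print(row.value)
```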