4. Snapshotting {Persistence}
■ Data is taken as it exists in memory and written to disk
■ Point-in-time copy of the in-memory data
■ Useful for backups and for transfer to another server
■ Written to the file named by "dbfilename", stored in "dir"
■ Until the next snapshot is taken, any data written since the last snapshot can be lost if Redis crashes
6. Snapshotting
■ How often to perform an automatic snapshot
■ Whether to keep accepting writes if a snapshot fails
■ Snapshot compression
■ What to name the snapshot file on disk
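The four options above map to redis.conf directives. A sketch with illustrative values (the `save` thresholds shown are the historical defaults; the `dir` path is an assumption):

```conf
# take a snapshot if at least 1 write occurred in 900s,
# 10 writes in 300s, or 10000 writes in 60s
save 900 1
save 300 10
save 60 10000
# keep accepting writes even if a background save fails
stop-writes-on-bgsave-error no
# compress the snapshot with LZF
rdbcompression yes
# snapshot filename and the directory it is written to
dbfilename dump.rdb
dir /var/lib/redis
```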
7. Append Only File (AOF) {Persistence}
■ Copies incoming write commands as they happen
■ Records data changes at the end of the backup file
■ The data set can be recovered by replaying the AOF
■ "appendonly yes"
■ "appendfsync always"
■ Limited by disk performance
8. Append Only File (AOF)
■ Option to use AOF
■ How often to sync writes to disk
■ Option to sync during AOF compaction
■ When AOF compaction (rewrite) occurs
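These options correspond to the following redis.conf directives; a sketch with commonly used values (`everysec` rather than `always` is shown here as an assumption, trading a second of durability for throughput):

```conf
# enable the append-only file
appendonly yes
# how often to fsync: always | everysec | no
appendfsync everysec
# skip fsync while a BGREWRITEAOF compaction is in progress
no-appendfsync-on-rewrite no
# compact the AOF automatically once it has doubled in size
# and is at least 64mb
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```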
9. Replication
■ Method where other servers receive an updated copy of the data as it is being written
■ Replicas can service read queries
■ A single master database sends writes out to multiple slave databases
■ Set operations can take seconds to finish
10. Replication
■ Configuring for replication
  ■ On the master, ensure that the path and filename are writable by the Redis process
  ■ Enable slaving: slaveof host port
■ In a running system, Redis can stop slaving or connect to a different master
  ■ New / transfer connection: SLAVEOF host port
  ■ Stop receiving updates: SLAVEOF NO ONE
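The configuration and runtime forms side by side (the master address is illustrative):

```conf
# in the slave's redis.conf:
slaveof 10.0.0.1 6379
# runtime equivalents, issued over redis-cli:
#   SLAVEOF 10.0.0.1 6379   -> start replicating from (or switch to) this master
#   SLAVEOF NO ONE          -> stop replicating; keep serving the current data set
```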
13. Replacing Failed Master {Scenario and Solution}
■ What will we do in case of system failure?
■ Scenario
  ■ Machine A – Redis Master, Machine B – Redis Slave
  ■ Machine A loses network connectivity
  ■ Machine C has Redis, but no copy of the data
■ Solution A
  ■ Make a fresh snapshot on Machine B using SAVE
  ■ Copy the snapshot to Machine C
  ■ Start Redis on Machine C
  ■ Tell Machine B to be a slave of Machine C
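Solution A, sketched as a command sequence (hostnames and paths are illustrative):

```
machine-B$ redis-cli SAVE                     # write a fresh snapshot on the slave
machine-B$ scp /var/lib/redis/dump.rdb machine-C:/var/lib/redis/
machine-C$ redis-server /etc/redis.conf       # start the new master from that snapshot
machine-B$ redis-cli SLAVEOF machine-C 6379   # point B at the new master
```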
15. Replacing Failed Master {Scenario and Solution}
■ What will we do in case of system failure?
■ Solution B
  ■ Use Machine B (the slave) as the new master
  ■ Create a new slave (maybe Machine C)
  ■ Update client configuration to read/write to the proper servers
  ■ (optional) Update server configuration if a restart is needed
16. Transactions
■ Begin a transaction with MULTI
■ Execute the queued commands with EXEC
■ Delayed execution with MULTI/EXEC can improve performance
■ The client holds off sending commands until all of them are known
■ When all of the commands are known, the client sends MULTI, then the commands, then EXEC
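What this looks like in a redis-cli session, assuming the `visits` key does not yet exist:

```
redis> MULTI
OK
redis> INCR visits        # queued, not yet executed
QUEUED
redis> INCR visits
QUEUED
redis> EXEC               # both INCRs run atomically here
1) (integer) 1
2) (integer) 2
```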
18. Reducing Memory Use {Short Structures}
■ Methods of reducing memory use
■ Ziplist – compact, unstructured representation of LISTs, HASHes, and ZSETs
■ Intset – compact representation of SETs of integers
■ As structures grow beyond configured limits, they are converted back to their regular data structure type
■ Manipulating the compact versions can become slow as they grow
19. Ziplist
■ The basic configuration for the three data types is similar
■ *-max-ziplist-entries – max number of items allowed for the ziplist encoding
■ *-max-ziplist-value – max size of any single item encoded in the ziplist
■ If either limit is exceeded, Redis converts the list/hash/zset into the non-ziplist structure
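The matching redis.conf directives (names as used in pre-7.0 Redis; the values shown are illustrative):

```conf
# stay ziplist-encoded only while both limits hold:
# *-entries caps the item count, *-value the byte size of any one item
list-max-ziplist-entries 128
list-max-ziplist-value 64
hash-max-ziplist-entries 128
hash-max-ziplist-value 64
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# sets of integers stay intset-encoded up to this many members
set-max-intset-entries 512
```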
22. Sharded Structures
■ Sharding – takes data, partitions it into smaller pieces, and sends each piece to a different location depending on which partition the data is assigned to
■ Sharding LISTs – requires Lua scripting
■ Sharding ZSETs – ZSET operations across shards lose the speed that makes ZSETs useful, so sharding is not useful for ZSETs
23. Sharded Structures
■ Sharding HASHes
  ■ A method of partitioning the data must be chosen
  ■ The hash's keys can be used as the source of information for sharding
■ To partition keys:
  ■ Calculate a hash function on the key
  ■ Calculate the number of shards needed, based on the number of keys we want to fit in one shard and the total number of keys
  ■ The resulting number of shards, together with the hash value, determines which shard we'll use
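The partitioning steps above can be sketched in Python. The CRC32 hash and the doubling of the shard count for growth headroom are assumptions of this sketch, not requirements:

```python
import binascii

def shard_key(base, key, total_keys, keys_per_shard):
    """Pick the sharded HASH key that should hold `key`.

    total_keys / keys_per_shard gives the shard count; doubling it
    leaves room for the data set to grow without resharding.
    """
    num_shards = 2 * total_keys // keys_per_shard
    # hash the key and map it onto one of the shards
    shard_id = binascii.crc32(key.encode("utf-8")) % num_shards
    return "%s:%d" % (base, shard_id)
```

Because the hash is deterministic, readers and writers independently compute the same shard for the same key, e.g. `shard_key("user:info", "alice", 10000, 512)` always names the same HASH.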
24. Scaling {read capacity}
■ When using short structures, make sure the max ziplist sizes are not too large
■ Use structures that offer good performance for the types of queries we want to perform
■ Compress large data sent to Redis for caching to reduce network reads and writes
■ Use pipelining and connection pooling
25. Scaling {read capacity}
■ Increase total read throughput using read-only slave servers
  ■ Always remember to WRITE TO THE MASTER
  ■ Writing to a SLAVE will cause an error
■ Redis Sentinel
  ■ A mode where the redis-server binary doesn't act like the typical server
  ■ Watches the behavior and health of master(s) and slave(s)
  ■ Intended to offer automated failover
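A minimal sentinel.conf sketch (the master name, address, and quorum are illustrative):

```conf
# monitor a master called "mymaster"; 2 sentinels must agree it is down
sentinel monitor mymaster 10.0.0.1 6379 2
# consider the master down after 30s without a valid reply
sentinel down-after-milliseconds mymaster 30000
# during failover, resync slaves with the new master one at a time
sentinel parallel-syncs mymaster 1
```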
26. Scaling {memory capacity}
■ Make sure to apply all methods of reducing read data volume
■ Make sure larger pieces of unrelated functionality are moved to different servers
■ Aggregate writes in local memory before writing to Redis
■ Consider using locks or Lua scripting when limitations of WATCH/MULTI/EXEC are encountered
■ When using AOF, keep in mind that the disk needs to keep up with the volume we're writing
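The "aggregate writes in local memory" idea can be sketched in Python. The class below is a hypothetical helper: it buffers counter increments locally and flushes them as one batch, which in real use would be sent through a single Redis pipeline instead of one round trip per INCR:

```python
from collections import defaultdict

class WriteAggregator:
    """Buffer counter increments locally; flush them as one batch."""

    def __init__(self):
        self.pending = defaultdict(int)

    def incr(self, key, amount=1):
        # cheap in-process update, no network round trip
        self.pending[key] += amount

    def flush(self):
        # one INCRBY per distinct key, however many incr() calls happened;
        # in real use these tuples would be queued on a Redis pipeline
        batch = [("INCRBY", key, amount) for key, amount in self.pending.items()]
        self.pending.clear()
        return batch
```

Ten thousand `incr("hits:page1")` calls collapse into a single `("INCRBY", "hits:page1", 10000)` command at flush time.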
28. Scaling {complex queries}
■ Scenario: machines have enough memory to hold the index, but we need to execute more queries than the server can handle
■ Use: SUNIONSTORE, SINTERSTORE, SDIFFSTORE, ZINTERSTORE, and/or ZUNIONSTORE
■ Since we "read" from a slave, set: slave-read-only no
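The *STORE commands write their result to a key, which is why the slave must allow writes. A redis-cli sketch (the index key names are illustrative):

```
# on a slave configured with "slave-read-only no":
SINTERSTORE idx:tmp idx:term1 idx:term2   # intersect two index sets, store the result
SMEMBERS idx:tmp                          # read back the stored result
DEL idx:tmp                               # clean up the temporary key
```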