Introduction to Redis 3.0, its features, and its improvements. What are the differences between Redis, Memcached, and Aerospike? The strengths of Redis, and how to steer clear of its weaknesses.
This session introduces Redis 3.0 and its history, exploring Redis's features and improvements. It also analyzes the differences between Redis, Memcached, and Aerospike, to help inform future judgments about business requirements. Finally, it shares the scenarios where Redis is a good fit, along with fallback or integration options for the scenarios where it is not. The session suits Redis beginners, those who want a deeper understanding of Redis, and anyone who has ever been inexplicably burned by a Redis pitfall.
This document discusses Redis management, high availability, and cell architecture. It covers Redis features like being single-threaded, in-memory, and using collections. It compares RDB and AOF persistence methods and explains Redis replication. Redis Sentinel is described as providing Redis high availability by monitoring masters, promoting slaves, and configuring failovers. Twemproxy is presented as a load balancer for Redis clusters. Cell architecture is proposed as sharding user data by cell for scalability and fault tolerance.
This document provides troubleshooting tips for Redis. It discusses that Redis is single-threaded and can slow down if long commands are processed. It recommends using the latest stable version and checking for memory fragmentation issues. For replication, it suggests configuring health checks and resynchronizing slaves after a master restart. Troubleshooting tips include checking configuration options like stop-writes-on-bgsave-error and increasing client output buffer limits for large datasets. The document stresses that Redis security is weak and port access should be limited to private networks only.
Redis Tips discusses various tips for using Redis, including dangerous commands to avoid such as KEYS and FLUSHALL. It describes how FLUSHALL works differently in Redis than in Memcached and can be slow. It also covers Redis memory policies, replication, RDB snapshots, and approaches to sharding or clustering Redis, including using a client library or a server-side proxy. Sentinel is introduced as a failover solution for Redis masters and slaves.
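The client-library sharding approach mentioned above can be sketched as a simple hash-based key router. The node names and the hash choice below are illustrative assumptions, not details from the original talk; a real client library would keep one connection per shard and often use consistent hashing to limit remapping when nodes change.

```python
import hashlib

# Hypothetical shard list; in practice each entry maps to a live connection.
NODES = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]

def node_for_key(key: str, nodes=NODES) -> str:
    """Route a key to a shard by hashing it, as a sharding client library might."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]
```

Because the routing is deterministic, every client that shares the same node list agrees on where a key lives without any coordination.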
This document provides an overview of Redis, including:
- Redis is an in-memory database that supports various data types and persistence. It can function as a cache but is not solely a cache.
- Redis has very fast performance and supports features like expiration, different data types (strings, hashes, lists, sets, sorted sets), replication, and sharding.
- The document discusses Redis use cases, installation, benchmarking results, commands, and provides examples of how Redis could be used for tasks like tracking page views, popular news lists, and real-time gaming rankings.
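The real-time gaming ranking use case above maps naturally onto a Redis sorted set (ZINCRBY to bump a score, ZREVRANGE to read the top entries). As a server-free sketch, the same semantics can be mimicked in plain Python; the function names mirror the Redis commands but the implementation is only an analogy.

```python
def zincrby(board: dict, member: str, delta: float) -> float:
    """Mimic Redis ZINCRBY: increment a member's score, creating it if absent."""
    board[member] = board.get(member, 0.0) + delta
    return board[member]

def zrevrange(board: dict, start: int, stop: int):
    """Mimic Redis ZREVRANGE: members ordered by descending score, inclusive stop."""
    ranked = sorted(board.items(), key=lambda kv: kv[1], reverse=True)
    return [member for member, _ in ranked[start:stop + 1]]

scores = {}
zincrby(scores, "alice", 50)
zincrby(scores, "bob", 30)
zincrby(scores, "alice", 10)
```

With a real Redis sorted set, both the increment and the range read stay logarithmic in the number of players, which is what makes live leaderboards cheap.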
Practical advice on achieving persistence in Redis: a detailed overview of the pros and cons of RDB snapshots and AOF logging, plus tips and tricks for properly configuring persistence with Redis pools and master/slave replication.
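As a concrete illustration of the RDB-versus-AOF trade-off, a redis.conf fragment combining both might look like the following. The directives are real Redis configuration options, but the thresholds are illustrative defaults, not recommendations from the talk.

```
# RDB: snapshot if at least 1000 keys changed within 60 seconds
save 60 1000

# AOF: log every write, fsync once per second (bounded loss, modest I/O cost)
appendonly yes
appendfsync everysec

# Rewrite the AOF once it has doubled in size past 64 MB
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```

Running both gives fast restarts from the RDB file while the AOF bounds data loss to roughly one second of writes.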
This document provides instructions on how to use Redis, an open source in-memory NoSQL database. Redis can be used when fast responses are needed or for tasks like session storage, job queues, and real-time rankings. It has advantages like speed and atomic operations but disadvantages like memory overhead. The document explains how to install, run, and test Redis, and demonstrates several of its features including lists, sets, hashes, and publish/subscribe. It also provides recommendations on clustering and shows the results of a performance test.
Ceph BlueStore: a new storage backend in Ceph / Maksim Vorontsov (Redsys) / Ontico
- What SDS is (features common to (almost) all solutions: scaling, abstraction from hardware resources, policy-based management, clustered file systems);
- Why we decided to use SDS (we needed object storage);
- Why we chose Ceph rather than other open-source (GlusterFS, Swift...) or proprietary (IBM Elastic Storage, Huawei OceanStor) solutions;
- What else Ceph can do besides object storage (RBD, CephFS);
- How Ceph works (on the server side);
- What BlueStore adds compared with the classic backend (on top of a file system);
- Performance comparison (test metrics);
- BlueStore is still a tech preview;
- Conclusion. Links and further reading.
Query Optimization with MySQL 8.0 and MariaDB 10.3: The Basics / Jaime Crespo
Query optimization tutorial for beginners using MySQL 8.0 and MariaDB 10.3, presented at the Percona Live Europe 2018 open source database conference in Frankfurt. The source can be found, and errors reported, at https://github.com/jynus/query-optimization
Material URL moved to: http://jynus.com/dbahire/pleu18
This document summarizes Redis versions from 2.8 to 3.2 and discusses new features and improvements. Redis 2.8 introduced the SCAN command to iteratively scan keys and partial sync replication to avoid full resynchronization if connections are briefly lost. Redis 3.0 added Redis Cluster for automatic sharding and diskless replication for faster replication without disk I/O. Redis 3.2 optimized string header sizes in SDS to reduce memory usage and added new GEO commands.
Journey to Stability: Petabyte Ceph Cluster in OpenStack Cloud / Patrick McGarry
Cisco Cloud Services provides an OpenStack platform to Cisco SaaS applications using a worldwide deployment of Ceph clusters storing petabytes of data. The initial Ceph cluster design experienced major stability problems as the cluster grew past 50% capacity. Strategies were implemented to improve stability including client IO throttling, backfill and recovery throttling, upgrading Ceph versions, adding NVMe journals, moving the MON levelDB to SSDs, rebalancing the cluster, and proactively detecting slow disks. Lessons learned included the importance of devops practices, sharing knowledge, rigorous testing, and balancing performance, cost and time.
Redis is an open source, in-memory data structure store that can be used as a database, cache, or message broker. It supports basic data types like strings, hashes, lists, and sets. Redis features high performance, replication, publishing/subscribing, and Lua scripting. It is widely adopted by companies like GitHub, StackOverflow, and Blizzard for use cases like caching, sessions, queues, and as a real-time database.
Suse Enterprise Storage 3 provides iSCSI access to connect to ceph storage remotely over TCP/IP, allowing clients to access ceph storage using the iSCSI protocol. The iSCSI target driver in SES3 provides access to RADOS block devices. This allows any iSCSI initiator to connect to SES3 over the network. SES3 also includes optimizations for iSCSI gateways like offloading operations to object storage devices to reduce locking on gateway nodes.
This document compares the caching technologies Memcached and Redis. It provides an overview of how caching works and the problems that can occur with caching like cache misses, stale data, and warm-up times. It details the features of Memcached and Redis, including their data structures and operations. Benchmarks are presented comparing the performance of Memcached and Redis for set and get operations with varying numbers of servers and clients. Redis performance degrades under heavy load due to its single-threaded architecture while Memcached scales better. The document concludes more benchmarks are needed to fully evaluate Redis.
Presentation from 2016 Austin OpenStack Summit.
The Ceph upstream community is declaring CephFS stable for the first time in the recent Jewel release, but that declaration comes with caveats: while we have filesystem repair tools and a horizontally scalable POSIX filesystem, we have default-disabled exciting features like horizontally-scalable metadata servers and snapshots. This talk will present exactly what features you can expect to see, what's blocking the inclusion of other features, and what you as a user can expect and can contribute by deploying or testing CephFS.
This presentation provides an overview of the Dell PowerEdge R730xd server performance results with Red Hat Ceph Storage. It covers the advantages of using Red Hat Ceph Storage on Dell servers with their proven hardware components that provide high scalability, enhanced ROI cost benefits, and support of unstructured data.
Redis is a key-value store that can be used as a database, cache, and message broker. It supports basic data structures like strings, hashes, lists, sets, sorted sets with operations that are fast thanks to storing the entire dataset in memory. Redis also provides features like replication, transactions, pub/sub messaging and can be used for caching, queueing, statistics and inter-process communication.
How a web accelerator accelerates your site / Alexander Krizhanovsky (Tempesta ...) / Ontico
In this talk I will explain what a web accelerator is, also known as a reverse proxy or a frontend. As the name suggests, it speeds up a website. But how does it do that? What kinds are there? What can they do, and what can they not? What is distinctive about each solution? Overall, I will try to cover them in depth and in breadth.
I will also talk about another open source web accelerator, Tempesta FW. The project is unique in that it is a hybrid of a web accelerator and a firewall, developed specifically for processing and filtering large volumes of HTTP traffic. Its main use cases are protection against application-layer DDoS attacks and simply delivering large volumes of HTTP traffic at low hardware cost.
- What a web accelerator is, why it was invented, and how to tell when you need one;
- The typical functionality of a reverse proxy and how it differs from a web server;
- A mention of SSL accelerators;
- A look deep inside HTTP: how it controls caching and proxying, what can be cached and what cannot;
- A comparison of the most popular accelerators (Nginx, Varnish, Apache Traffic Server, Apache HTTPD, Squid) by features and internals;
- Why yet another web accelerator, Tempesta FW, is needed, and how it differs from the others.
This document summarizes a presentation about software defined storage using the open source Gluster file system. It begins with an overview of storage concepts like reliability, performance, and scaling. It then discusses the history and types of storage and provides case studies of proprietary storage systems. The presentation introduces software defined storage and Gluster, describing its modular design, use in cloud computing, pros and cons. Key Gluster concepts are defined and its distributed and replicated volume types are explained. The presentation concludes with instructions for setting up and using Gluster.
This is to introduce the related components in SUSE Linux Enterprise High Availability Extension product to build High Available Storage (ha-lvm/drbd/iscsi/nfs, clvm, ocfs2, cluster-raid1).
Trying and evaluating the new features of GlusterFS 3.5 / Keisuke Takahashi
My presentation in LinuxCon/CloudOpen Japan 2014.
It has only been a few days since GlusterFS 3.5 was released, so feel free to correct me if you find any mistakes or misunderstandings. Thanks.
1. Redis Sentinel provides high availability for Redis databases by monitoring Redis servers, detecting failures, and initiating failovers to slave servers.
2. When a failure is detected, Sentinel will promote a slave to become the new master, redirect clients to the new master, and reconfigure other slaves to connect to the new master.
3. While Sentinel provides basic high availability, it has some limitations such as not being able to promote a slave if the original master also becomes a slave, and not being able to handle Redis servers that are loading data during startup.
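The failure-detection step in point 1 hinges on a quorum: a master is only treated as objectively down when enough Sentinels agree, at which point a replacement is chosen. A minimal sketch of that decision follows; the quorum value, the report counts, and the lag-only selection rule are illustrative simplifications (real Sentinel also weighs slave priority and replication offset).

```python
def objectively_down(down_reports: int, quorum: int) -> bool:
    """A master is ODOWN once at least `quorum` Sentinels report it subjectively down."""
    return down_reports >= quorum

def pick_promotion_candidate(slaves):
    """Prefer the slave with the smallest replication lag (a deliberate
    simplification of Sentinel's real selection rules)."""
    return min(slaves, key=lambda s: s["lag"])

slaves = [{"name": "s1", "lag": 12}, {"name": "s2", "lag": 3}]
```

The quorum prevents a single Sentinel with a flaky network path from triggering a needless failover.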
The document discusses performance analysis of Ceph storage clusters. It begins by providing context on SUSE Enterprise Storage 5 and why performance analysis is important. It then describes how to analyze performance using tools like Ceph commands, FIO, LTTNG, and Iperf. Example results are shown from testing network performance, disk performance, and cluster-level benchmarks on an HPE Apollo storage cluster. Integration with Salt is also discussed for automating performance testing across a Ceph cluster.
A webinar that looks into the new features that the Windows Server 2016 will offer in the DNS, DHCP and IPv6 space.
Showcase of some of the new stuff using the latest tech preview and the aim is to give administrators a quick overview of the Windows Server 2016 and enough information to decide if early adoption is worthwhile.
This document summarizes Marian Marinov's testing and experience with different distributed filesystems at his company SiteGround. He tested CephFS, GlusterFS, MooseFS, OrangeFS, and BeeGFS. CephFS required a lot of resources but lacked redundancy. GlusterFS was relatively easy to set up but had high CPU usage. MooseFS and OrangeFS were also easy to set up. Ultimately, they settled on Ceph RBD with NFS and caching for performance and simplicity. File creation performance tests showed MooseFS and NFS+Ceph RBD outperformed OrangeFS and GlusterFS. Tuning settings like MTU, congestion control, and caching helped optimize performance.
This document provides an overview and planning guidelines for a first Ceph cluster. It discusses Ceph's object, block, and file storage capabilities and how it integrates with OpenStack. Hardware sizing examples are given for a 1 petabyte storage cluster with 500 VMs requiring 100 IOPS each. Specific lessons learned are also outlined, such as realistic IOPS expectations from HDD and SSD backends, recommended CPU and RAM per OSD, and best practices around networking and deployment.
This document summarizes BlueStore, a new storage backend for Ceph that provides faster performance compared to the existing FileStore backend. BlueStore manages metadata and data separately, with metadata stored in a key-value database (RocksDB) and data written directly to block devices. This avoids issues with POSIX filesystem transactions and enables more efficient features like checksumming, compression, and cloning. BlueStore addresses consistency and performance problems that arose with previous approaches like FileStore and NewStore.
Pacemaker is a high availability cluster resource manager that can be used to provide high availability for MySQL databases. It monitors MySQL instances and replicates data between nodes using replication. If the primary MySQL node fails, Pacemaker detects the failure and fails over to the secondary node, bringing the MySQL service back online without downtime. Pacemaker manages shared storage and virtual IP failover to ensure connections are direct to the active MySQL node. It is important to monitor replication state and lag to ensure data consistency between nodes.
Classical music is facing challenges in attracting new audiences and maintaining financial viability. However, advocates argue that classical music can remain relevant by embracing new technologies, innovative programming, and diverse forms of expression. Connecting with younger audiences through social media and collaborations with popular artists may help classical music thrive in the digital age.
The document discusses the history and evolution of the English language from its origins as Anglo-Frisian dialects brought to Britain by Anglo-Saxon settlers in the 5th century AD. Over time, the language was influenced by Old Norse during the Viking invasions and later by Norman French following the Norman conquest of 1066, gaining vocabulary from both. Modern English began emerging in the late 15th century after the invention of the printing press allowed written English to become more standardized.
Redis High availability and fault tolerance in a multitenant environment / Iccha Sethi
This document discusses running Redis in a multi-tenant environment. It covers the architecture of using Redis with high availability, including a master-slave setup with load balancing and failover. It also discusses security and isolation techniques when using Redis for multiple customers, such as access control lists and SSL. Finally, it emphasizes the importance of monitoring all aspects of the Redis infrastructure and environment.
The document provides an introduction to Redis, describing it as an open source, advanced key-value store that can serve as a data structure server with strings, hashes, lists, sets, and sorted sets. It also gives an overview of installing and starting Redis, and provides examples of basic usage of Redis strings, lists, sets, and hashes.
This document discusses troubleshooting Redis. Some key points:
- Redis is single-threaded, so commands like KEYS and FLUSHALL, and deleting large collections, can be slow. It's better to use SCAN instead of KEYS.
- Creating Redis database snapshots (RDB files) and rewriting the append-only file (AOF) can cause high disk I/O and CPU usage. It's best to disable automatic rewrites.
- Monitoring memory usage and fragmentation is important to avoid performance issues. The maxmemory setting also needs monitoring to prevent out-of-memory errors.
- Network and replication failures need solutions like DNS failover, or using ZooKeeper for coordination, to maintain high availability of Redis.
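The KEYS-versus-SCAN advice above comes down to blocking: KEYS walks the whole keyspace in one call on the single Redis thread, while SCAN returns a cursor plus a small batch per call, letting other commands interleave. A server-free sketch of the cursor contract (the batch size and data are illustrative; real SCAN cursors are opaque, not plain offsets):

```python
def scan(store: dict, cursor: int, count: int = 2):
    """Mimic Redis SCAN: return (next_cursor, batch); a cursor of 0 means done."""
    keys = sorted(store)
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count if cursor + count < len(keys) else 0
    return next_cursor, batch

store = {f"user:{i}": i for i in range(5)}
seen, cursor = [], None
while cursor != 0:
    cursor, batch = scan(store, cursor or 0)
    seen.extend(batch)
```

The loop shape — call, collect the batch, stop when the cursor comes back as 0 — is exactly how clients drive the real SCAN command.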
10 Ways to Scale with Redis - LA Redis Meetup 2019 / Dave Nielsen
Redis has 10 different data structures (String, Hash, List, Set, Sorted Set, Bit Array, Bit Field, HyperLogLog, Geospatial Index, Streams), plus Pub/Sub and many Redis Modules. In this talk, Dave gives 10 examples of how to use these data structures to scale your website. He starts with the basics, such as caching and user session management, then demonstrates user-generated tags, leaderboards, and counting things with HyperLogLog, and finishes with a demo of Redis Pub/Sub versus Redis Streams, which can be used to scale a microservices-based architecture.
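The cache and session-management use cases that open the list can be sketched as a tiny TTL cache in plain Python. In production this would be a Redis string written with SETEX (or SET plus EXPIRE), so the structure below is only an analogy; the key names are illustrative.

```python
import time

def cache_set(cache: dict, key: str, value, ttl: float) -> None:
    """Store a value with an absolute expiry time, like Redis SETEX."""
    cache[key] = (value, time.monotonic() + ttl)

def cache_get(cache: dict, key: str):
    """Return the value if present and unexpired, else None (a cache miss)."""
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:
        del cache[key]  # lazy expiration, similar to Redis's passive expiry
        return None
    return value

c = {}
cache_set(c, "session:42", {"user": "alice"}, ttl=30.0)
```

Expiring sessions this way means stale entries clean themselves up on the next read instead of needing a background sweeper.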
Quick-and-Easy Deployment of a Ceph Storage Cluster / Patrick Quairoli
Quick & Easy Deployment of a Ceph Storage Cluster with SUSE Enterprise Storage
The document discusses deploying a Ceph storage cluster using SUSE Enterprise Storage. It begins with an introduction to Ceph and how it works as a distributed object storage system. It then covers designing Ceph clusters based on workload needs and measuring performance. The document concludes with step-by-step instructions for deploying a basic three node Ceph cluster with monitoring using SUSE Enterprise Storage.
What's new with enterprise Redis - Leena Joshi, Redis LabsRedis Labs
Redis Labs manages over 160k+ HA databases, 10k clustered databases, without data loss in spite of one node failure a day and one data center outage per month. Using Enterprise
Redis(RLEC), Redis Labs delivers seamless zero downtime scaling, true high availability with persistence, cross-rack/zone/
datacenter replication and instant automatic failover. Learn how. Join this session for a deep dive into how enterprise Redis makes for no-hassle Redis deployments and the roadmap for new Redis capabilities. Discover new cost savings with Redis on Flash for cost-effective high performance operations and analytics
Day 2 General Session Presentations RedisConfRedis Labs
The document discusses new memory technologies like persistent memory and their implications. It provides latency and bandwidth numbers for different memory types and notes that heterogeneous memory systems using tiers of DRAM and NVM provide opportunities for better performance and cost. Examples are given of key-value stores and databases leveraging NVM to achieve high performance while reducing costs. The talk also discusses how new distributed data structures like CRDTs could be used across servers with shared memory.
Redis vs. MongoDB: Comparing In-Memory Databases with Percona Memory EngineScaleGrid.io
In this presentation, we’re comparing two of the most popular NoSQL databases: Redis (in-memory) and MongoDB (Percona memory storage engine).
Redis is a popular and very fast in-memory database structure store primarily used as a cache or a message broker. Being in-memory, it’s the data store of choice when response times trump everything else.
MongoDB is an on-disk document store that provides a JSON interface to data and has a very rich query language. Known for its speed, efficiency, and scalability, it’s currently the most popular NoSQL database used today. However, being an on-disk database, it can’t compare favorably to an in-memory database like Redis in terms of absolute performance. But, with the availability of the in-memory storage engines for MongoDB, a more direct comparison becomes feasible.
Read the full post on the ScaleGrid blog: https://scalegrid.io/blog/comparing-in-memory-databases-redis-vs-mongodb-percona-memory-engine/
RedisConf18 - My Other Car is a Redis ClusterRedis Labs
This document discusses using Redis as both a cache and primary data store. Redis is described as fast, simple, and easily scalable. It can be used as a cache for things like user profiles, with hot data stored in Redis and cold data stored elsewhere, like LMDB. Redis is also used as a primary store for tracking metrics like pageviews and purchases. The document provides examples of storing hyperloglog data in Redis to track unique counts and expiries. It also discusses techniques for load balancing and aggregating Redis data.
Container Storage Best Practices in 2017Keith Resar
Docker Storage Drivers are a rapidly moving target. Considering the addition of new graphdrivers and continued maturing of the existing set, we evaluate how each works, performance implications from their implementation architecture, and ideal use cases for each.
Redis provides better performance than Memcached as a cache backend for Magento. Testing showed Redis handled 50,000 cache records in 6 hours compared to Memcached handling over 10 million in the same time. Full page cache performance tests found Redis was 17-20% faster than Memcached. While Redis has some limitations, its support for tagging, replication, and larger object sizes make it a more reliable and scalable alternative to Memcached for Magento caching.
Redis is an advanced key-value NoSQL data store that is similar to memcached but with additional data types like lists, sets, and ordered sets. It was created in 2009 by Salvatore Sanfilippo to provide better performance than MySQL for real-time analytics. Major companies like Twitter, GitHub, Pinterest, and Snapchat use Redis to store user profiles, timelines, and other frequently accessed data due to its speed. The Redis plugin for Grails provides methods to cache data and integrate Redis as a data store or for sessions.
How you can benefit from using Redis - RamirezCodemotion
The document discusses how Redis can be used and its benefits. Redis is an open source, BSD licensed key-value store that can be used as an advanced data structure server since keys can contain strings, hashes, lists, sets and sorted sets. It describes how Redis is very fast, useful for caching, and commonly used by large companies like Twitter, Pinterest, Wikipedia and others to power their infrastructure. Examples of how Redis is used include storing user timelines and profiles, caching query results, and as a message broker for pub/sub features.
The document discusses the NoSQL ecosystem. It provides a brief history of NoSQL databases from the late 1990s to today. It then lists and categorizes the major NoSQL databases. The rest of the document discusses interesting properties of NoSQL databases like data models, query models, transactions, and consistency. It also provides examples of real-world usage at companies like Netflix, Facebook, and Craigslist. Key takeaways are around developer accessibility, reuse of NoSQL components, and using the right tool for the job (polyglot persistence).
The document provides an overview of Redis, including:
- Redis is an in-memory database that supports data structures like strings, lists, sets, and hashes. It is often used for caching, messaging, and building real-time applications.
- Major companies like Twitter, GitHub, and Pinterest use Redis for its speed and support for complex data types.
- Redis can be deployed in standalone, master-slave, or cluster topologies to provide redundancy, scaling, and automatic failover. Persistence to disk can be configured using snapshots or append-only files.
- Redis offers advantages over other databases and caching solutions in terms of performance, data types, scalability, and availability. It has a simple
10 Ways to Scale Your Website Silicon Valley Code Camp 2019Dave Nielsen
Redis has 10 different data structures (String, Hash, List, Set, Sorted Set, Bit Array, Bit Field, Hyperloglog, Geospatial Index, Streams) plus Pub/Sub and many Redis Modules. In this talk, Dave will give 10 examples of how to use these data structures to scale your website. I will start with the basics, such as a cache and User session management. Then I demonstrate user generated tags, leaderboards and counting things with hyberloglog. I will with a demo of Redis Pub/Sub vs Redis Streams which can be used to scale your Microservices-based architecture.
Developing polyglot persistence applications (SpringOne India 2012)Chris Richardson
This document discusses using polyglot persistence, which involves using multiple data storage technologies together to address different application needs. It describes using Redis as a cache to improve performance for a food delivery application. The document proposes building "materialized views" in Redis by denormalizing and indexing data to optimize queries for finding available restaurants. It outlines synchronizing data between the MySQL database and Redis cache to maintain consistency.
Peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns of our Memcached and Redis offerings and how customers have used them for in-memory operations and achieved improved latency and throughput for applications. During this session, we review best practices, design patterns, and anti-patterns related to Amazon ElastiCache.
Testing in Production, Deploy on FridaysYi-Feng Tzeng
本議題是去年 ModernWeb'19 「Progressive Deployment & NoDeploy」的延伸。雖然已提倡 Testing in Production 多年,但至今願意或敢於實踐的團隊並不多,背後原因多是與文化及態度有些關係。
此次主要分享推廣過程中遇到的苦與甜,以及自己親力操刀幾項達成 Testing in Production, Deploy on Fridays 成就的產品。
This document discusses timing attacks against web applications. It begins by referencing a previous conference presentation on timing attacks and front-end performance vulnerabilities. It then demonstrates how subtle differences in response times can reveal privileged information, like whether a username is valid. The document advocates adding random delays to responses to mitigate these timing attack vectors. It provides several examples of timing attacks in practice and potential mitigation techniques to obscure timing patterns and prevent secret information leakage.
Superpower Your Apache Kafka Applications Development with Complementary Open...Paul Brebner
Kafka Summit talk (Bangalore, India, May 2, 2024, https://events.bizzabo.com/573863/agenda/session/1300469 )
Many Apache Kafka use cases take advantage of Kafka’s ability to integrate multiple heterogeneous systems for stream processing and real-time machine learning scenarios. But Kafka also exists in a rich ecosystem of related but complementary stream processing technologies and tools, particularly from the open-source community. In this talk, we’ll take you on a tour of a selection of complementary tools that can make Kafka even more powerful. We’ll focus on tools for stream processing and querying, streaming machine learning, stream visibility and observation, stream meta-data, stream visualisation, stream development including testing and the use of Generative AI and LLMs, and stream performance and scalability. By the end you will have a good idea of the types of Kafka “superhero” tools that exist, which are my favourites (and what superpowers they have), and how they combine to save your Kafka applications development universe from swamploads of data stagnation monsters!
Stork Product Overview: An AI-Powered Autonomous Delivery FleetVince Scalabrino
Imagine a world where instead of blue and brown trucks dropping parcels on our porches, a buzzing drove of drones delivered our goods. Now imagine those drones are controlled by 3 purpose-built AI designed to ensure all packages were delivered as quickly and as economically as possible That's what Stork is all about.
A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions.
Building API data products on top of your real-time data infrastructureconfluent
This talk and live demonstration will examine how Confluent and Gravitee.io integrate to unlock value from streaming data through API products.
You will learn how data owners and API providers can document, secure data products on top of Confluent brokers, including schema validation, topic routing and message filtering.
You will also see how data and API consumers can discover and subscribe to products in a developer portal, as well as how they can integrate with Confluent topics through protocols like REST, Websockets, Server-sent Events and Webhooks.
Whether you want to monetize your real-time data, enable new integrations with partners, or provide self-service access to topics through various protocols, this webinar is for you!
🏎️Tech Transformation: DevOps Insights from the Experts 👩💻campbellclarkson
Connect with fellow Trailblazers, learn from industry experts Glenda Thomson (Salesforce, Principal Technical Architect) and Will Dinn (Judo Bank, Salesforce Development Lead), and discover how to harness DevOps tools with Salesforce.
What’s new in VictoriaMetrics - Q2 2024 UpdateVictoriaMetrics
These slides were presented during the virtual VictoriaMetrics User Meetup for Q2 2024.
Topics covered:
1. VictoriaMetrics development strategy
* Prioritize bug fixing over new features
* Prioritize security, usability and reliability over new features
* Provide good practices for using existing features, as many of them are overlooked or misused by users
2. New releases in Q2
3. Updates in LTS releases
Security fixes:
● SECURITY: upgrade Go builder from Go1.22.2 to Go1.22.4
● SECURITY: upgrade base docker image (Alpine)
Bugfixes:
● vmui
● vmalert
● vmagent
● vmauth
● vmbackupmanager
4. New Features
* Support SRV URLs in vmagent, vmalert, vmauth
* vmagent: aggregation and relabeling
* vmagent: Global aggregation and relabeling
* vmagent: global aggregation and relabeling
* Stream aggregation
- Add rate_sum aggregation output
- Add rate_avg aggregation output
- Reduce the number of allocated objects in heap during deduplication and aggregation up to 5 times! The change reduces the CPU usage.
* Vultr service discovery
* vmauth: backend TLS setup
5. Let's Encrypt support
All the VictoriaMetrics Enterprise components support automatic issuing of TLS certificates for public HTTPS server via Let’s Encrypt service: https://docs.victoriametrics.com/#automatic-issuing-of-tls-certificates
6. Performance optimizations
● vmagent: reduce CPU usage when sharding among remote storage systems is enabled
● vmalert: reduce CPU usage when evaluating high number of alerting and recording rules.
● vmalert: speed up retrieving rules files from object storages by skipping unchanged objects during reloading.
7. VictoriaMetrics k8s operator
● Add new status.updateStatus field to the all objects with pods. It helps to track rollout updates properly.
● Add more context to the log messages. It must greatly improve debugging process and log quality.
● Changee error handling for reconcile. Operator sends Events into kubernetes API, if any error happened during object reconcile.
See changes at https://github.com/VictoriaMetrics/operator/releases
8. Helm charts: charts/victoria-metrics-distributed
This chart sets up multiple VictoriaMetrics cluster instances on multiple Availability Zones:
● Improved reliability
● Faster read queries
● Easy maintenance
9. Other Updates
● Dashboards and alerting rules updates
● vmui interface improvements and bugfixes
● Security updates
● Add release images built from scratch image. Such images could be more
preferable for using in environments with higher security standards
● Many minor bugfixes and improvements
● See more at https://docs.victoriametrics.com/changelog/
Also check the new VictoriaLogs PlayGround https://play-vmlogs.victoriametrics.com/
Hyperledger Besu 빨리 따라하기 (Private Networks)wonyong hwang
Hyperledger Besu의 Private Networks에서 진행하는 실습입니다. 주요 내용은 공식 문서인https://besu.hyperledger.org/private-networks/tutorials 의 내용에서 발췌하였으며, Privacy Enabled Network와 Permissioned Network까지 다루고 있습니다.
This is a training session at Hyperledger Besu's Private Networks, with the main content excerpts from the official document besu.hyperledger.org/private-networks/tutorials and even covers the Private Enabled and Permitted Networks.
Streamlining End-to-End Testing Automation with Azure DevOps Build & Release Pipelines
Automating end-to-end (e2e) test for Android and iOS native apps, and web apps, within Azure build and release pipelines, poses several challenges. This session dives into the key challenges and the repeatable solutions implemented across multiple teams at a leading Indian telecom disruptor, renowned for its affordable 4G/5G services, digital platforms, and broadband connectivity.
Challenge #1. Ensuring Test Environment Consistency: Establishing a standardized test execution environment across hundreds of Azure DevOps agents is crucial for achieving dependable testing results. This uniformity must seamlessly span from Build pipelines to various stages of the Release pipeline.
Challenge #2. Coordinated Test Execution Across Environments: Executing distinct subsets of tests using the same automation framework across diverse environments, such as the build pipeline and specific stages of the Release Pipeline, demands flexible and cohesive approaches.
Challenge #3. Testing on Linux-based Azure DevOps Agents: Conducting tests, particularly for web and native apps, on Azure DevOps Linux agents lacking browser or device connectivity presents specific challenges in attaining thorough testing coverage.
This session delves into how these challenges were addressed through:
1. Automate the setup of essential dependencies to ensure a consistent testing environment.
2. Create standardized templates for executing API tests, API workflow tests, and end-to-end tests in the Build pipeline, streamlining the testing process.
3. Implement task groups in Release pipeline stages to facilitate the execution of tests, ensuring consistency and efficiency across deployment phases.
4. Deploy browsers within Docker containers for web application testing, enhancing portability and scalability of testing environments.
5. Leverage diverse device farms dedicated to Android, iOS, and browser testing to cover a wide range of platforms and devices.
6. Integrate AI technology, such as Applitools Visual AI and Ultrafast Grid, to automate test execution and validation, improving accuracy and efficiency.
7. Utilize AI/ML-powered central test automation reporting server through platforms like reportportal.io, providing consolidated and real-time insights into test performance and issues.
These solutions not only facilitate comprehensive testing across platforms but also promote the principles of shift-left testing, enabling early feedback, implementing quality gates, and ensuring repeatability. By adopting these techniques, teams can effectively automate and execute tests, accelerating software delivery while upholding high-quality standards across Android, iOS, and web applications.
Baha Majid WCA4Z IBM Z Customer Council Boston June 2024.pdfBaha Majid
IBM watsonx Code Assistant for Z, our latest Generative AI-assisted mainframe application modernization solution. Mainframe (IBM Z) application modernization is a topic that every mainframe client is addressing to various degrees today, driven largely from digital transformation. With generative AI comes the opportunity to reimagine the mainframe application modernization experience. Infusing generative AI will enable speed and trust, help de-risk, and lower total costs associated with heavy-lifting application modernization initiatives. This document provides an overview of the IBM watsonx Code Assistant for Z which uses the power of generative AI to make it easier for developers to selectively modernize COBOL business services while maintaining mainframe qualities of service.
Why Apache Kafka Clusters Are Like Galaxies (And Other Cosmic Kafka Quandarie...Paul Brebner
Closing talk for the Performance Engineering track at Community Over Code EU (Bratislava, Slovakia, June 5 2024) https://eu.communityovercode.org/sessions/2024/why-apache-kafka-clusters-are-like-galaxies-and-other-cosmic-kafka-quandaries-explored/ Instaclustr (now part of NetApp) manages 100s of Apache Kafka clusters of many different sizes, for a variety of use cases and customers. For the last 7 years I’ve been focused outwardly on exploring Kafka application development challenges, but recently I decided to look inward and see what I could discover about the performance, scalability and resource characteristics of the Kafka clusters themselves. Using a suite of Performance Engineering techniques, I will reveal some surprising discoveries about cosmic Kafka mysteries in our data centres, related to: cluster sizes and distribution (using Zipf’s Law), horizontal vs. vertical scalability, and predicting Kafka performance using metrics, modelling and regression techniques. These insights are relevant to Kafka developers and operators.
Ensuring Efficiency and Speed with Practical Solutions for Clinical OperationsOnePlan Solutions
Clinical operations professionals encounter unique challenges. Balancing regulatory requirements, tight timelines, and the need for cross-functional collaboration can create significant internal pressures. Our upcoming webinar will introduce key strategies and tools to streamline and enhance clinical development processes, helping you overcome these challenges.
13. 13/123
2015
Redis features
Pure
✔ Written in ANSI C.
✔ Few third-party library dependencies.
✔ Memcached depends on libevent, a large codebase.
✔ Redis implements its own epoll event loop, inspired by libevent.
✔ KISS principle.
✔ Each data structure does only what it should do.
15. 15/123
2015
Redis features
Ref: http://oldblog.antirez.com/post/redis-manifesto.html
5 - We're against complexity. We believe designing
systems is a fight against complexity. We'll accept to
fight the complexity when it's worthwhile but we'll try
hard to recognize when a small feature is not worth
1000s of lines of code. Most of the time the best way
to fight complexity is by not creating it at all.
16. 16/123
Redis features
Single-threaded
✔ No thread context switches.
✔ No thread race conditions.
✔ No other concurrency complications.
19. 19/123
Agenda
✔ Redis history
✔ Redis 3.0
✔ Redis features
✔ Redis and Memcached
✔ Redis and Aerospike
✔ Insight on the pit
20. 20/123
Redis and Memcached
✔ Redis uses a single-threaded I/O-multiplexing model.
✔ Simple operations achieve high throughput.
✔ Complicated (heavy) operations may block all the others.
✔ One instance usually uses only one CPU core.
22. 22/123
Redis and Memcached
✔ Redis can use jemalloc or tcmalloc to reduce memory fragmentation.
✔ But fragmentation still depends on the allocation patterns.
✔ Redis rarely uses free-lists or other tricks to optimize memory allocation.
✔ This fits Redis's simple / pure / efficient design.
Ref: http://www.databaseskill.com/1256096/
Ref: http://stackoverflow.com/questions/18097670/why-the-memory-fragmentation-is-less-than-1-in-redis
23. 23/123
Redis and Memcached
✔ Memcached uses a pre-allocated slab/slot memory pool.
✔ Slabs and pools reduce memory fragmentation to some degree.
✔ But they waste some space (memory overhead).
24. 24/123
Redis and Memcached
✔ Eviction ("garbage collection") behavior: approximate LRU.
✔ Redis 2.6
✔ Randomly picks 3 samples, removes the oldest one, and repeats until memory used is below the 'maxmemory' limit.
Ref: https://github.com/antirez/redis/blob/2.6/src/redis.c#L2464
26. 26/123
Redis and Memcached
✔ Eviction ("garbage collection") behavior: approximate LRU.
✔ Redis 3.0
✔ By default randomly picks 5 samples, inserts them into a small sorted pool, evicts the best (oldest) candidate, and repeats until memory used is below the 'maxmemory' limit.
✔ 5 samples (now) are more than 3 (before);
✔ the evicted candidate is closer to the global optimum.
Ref: https://github.com/antirez/redis/blob/3.0/src/redis.c#L3251
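The sample-and-evict loop described above can be modeled in a few lines of Python. This is a toy sketch of the idea, not Redis source code; the class name and its parameters are invented here, with `maxkeys` standing in for 'maxmemory' and `samples` for 'maxmemory-samples'.

```python
import random


class ApproxLRUCache:
    """Toy model of Redis 3.0's approximate-LRU eviction (a sketch,
    not Redis internals): sample a few random keys, keep the sampled
    keys sorted by idle time in a small pool, evict the oldest one."""

    def __init__(self, maxkeys, samples=5, pool_size=16):
        self.maxkeys = maxkeys        # stand-in for the 'maxmemory' limit
        self.samples = samples        # stand-in for 'maxmemory-samples'
        self.pool_size = pool_size    # size of the eviction candidate pool
        self.data = {}                # key -> value
        self.last_access = {}         # key -> logical access clock
        self.clock = 0

    def _touch(self, key):
        self.clock += 1
        self.last_access[key] = self.clock

    def get(self, key):
        if key in self.data:
            self._touch(key)
            return self.data[key]
        return None

    def set(self, key, value):
        # Evict until there is room, mimicking the maxmemory loop.
        while key not in self.data and len(self.data) >= self.maxkeys:
            self._evict_one()
        self.data[key] = value
        self._touch(key)

    def _evict_one(self):
        # Randomly sample a few keys, sort them by last access time,
        # and evict the longest-idle candidate from the pool.
        sampled = random.sample(list(self.data), min(self.samples, len(self.data)))
        pool = sorted(sampled, key=lambda k: self.last_access[k])[: self.pool_size]
        victim = pool[0]
        del self.data[victim]
        del self.last_access[victim]
```

Because only a handful of keys are sampled, the victim is usually close to, but not guaranteed to be, the globally oldest key; raising `samples` trades CPU for accuracy, which is exactly the 3-vs-5 difference between Redis 2.6 and 3.0.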
32. 32/123
Redis and Memcached
✔ My testbed (NO WARRANTY)
✔ Get: Memcached is usually faster than Redis.
✔ Set: Redis is usually faster than Memcached.
✔ Values from 0 to 100 KB favor Redis.
✔ Values from 100 KB to 10 MB favor Memcached.
✔ Values above 10 MB favor Redis.
37. 37/123
Redis and Aerospike
Ref: http://lynnlangit.com/2015/01/28/lessons-learned-benchmarking-nosql-on-the-aws-cloud-aerospikedb-and-redis/
38. 38/123
Redis and Aerospike
Ref: http://lynnlangit.com/2015/01/28/lessons-learned-benchmarking-nosql-on-the-aws-cloud-aerospikedb-and-redis/
39. 39/123
Redis and Aerospike
✔ Itamar Haber (Redis Labs, Chief Developer Advocate)
✔ Why didn't the benchmark use recommended Redis practices such as pipelining and multi-key operations?
✔ The missing pieces are a 20%/80% read-write test and a 100% write test.
✔ AOF was enabled in the benchmark; AOF is generally recommended on non-master Redis instances.
✔ Comparisons are as hard to do right as they are easy to do wrong.
Ref: https://redislabs.com/blog/the-lessons-missing-from-benchmarking-nosql-on-the-aws-cloud-aerospikedb-and-redis
40. 40/123
Redis and Aerospike
✔ Salvatore Sanfilippo (antirez, the author of Redis)
✔ GET/SET benchmarks are not a great way to compare different database systems.
✔ A better performance comparison is by use case.
✔ Test with the instance types most people will actually use; huge instance types can mask inefficiencies of certain database systems, and are not what most people are going to use anyway.
Ref: http://antirez.com/news/85
42. 42/123
Redis and Aerospike
✔ "However, as the network shifts, … By the time of the final read, about 10% of the increment operations have been lost."
✔ "Just like the CaS register test, increment and read latencies will jump from ~1 millisecond to ~500 milliseconds when a partition occurs."
✔ Even when Aerospike can service every request, latencies peak at ~2 seconds, which in practice is a service interruption of up to 2 seconds.
Ref: http://antirez.com/news/85
43. 43/123
Redis and Aerospike
✔ "In the summer of 2013 we faced exactly this problem: big-memory (192 GB RAM) server nodes were running out of memory and crashing again … We were being bitten by fragmentation."
✔ The ASMalloc tool found no memory leak, so the crashes turned out to be caused by memory fragmentation.
Ref: http://highscalability.com/blog/2015/3/17/in-memory-computing-at-aerospike-scale-when-to-choose-and-ho.html
44. 44/123
Agenda
✔ Redis history
✔ Redis 3.0
✔ Redis features
✔ Redis and Memcached
✔ Redis and Aerospike
✔ Insight on the pit
47. 47/123
Insight on the pit 【 Server-side sessions with Redis 】
Ref: http://vc2tea.com/redis-session/
48. 48/123
Insight on the pit 【 Server-side sessions with Redis 】
✔ Redis has many eviction policies, but most of them are based on sampling.
✔ This means the evicted item is a local optimum, not a global one.
✔ When 'maxmemory' is reached, Redis may evict items that are not old enough.
✔ Users get logged out early, and the worst part is you won't even notice it until users start complaining.
Ref: http://redis.io/topics/lru-cache
49. 49/123
Insight on the pit 【 Server-side sessions with Redis 】
✔ Alternative solutions:
✔ use a database as a second back-end store.
✔ 1. When writing a session, write it to both Redis and the database.
✔ 2. When reading a session, try Redis first, then the database.
✔ Or use Redis 3.0 and configure a larger 'maxmemory-samples' value.
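The write-through alternative above can be sketched as follows. This is illustrative code, not a specific library's API: `SessionStore` is an invented name, and the `get`/`set` methods assumed on the Redis client and the database are stand-ins for whatever your drivers actually expose.

```python
class SessionStore:
    """Write sessions to both Redis and a durable database;
    read Redis first, falling back to the database on a miss."""

    def __init__(self, redis_client, db):
        self.redis = redis_client
        self.db = db

    def write(self, session_id, data):
        # 1. Write to both back-ends, so an early Redis eviction
        #    loses nothing.
        self.redis.set(session_id, data)
        self.db.set(session_id, data)

    def read(self, session_id):
        # 2. Redis first (fast path) ...
        data = self.redis.get(session_id)
        if data is not None:
            return data
        # ... database second; repopulate Redis when the session
        # was evicted so later reads hit the fast path again.
        data = self.db.get(session_id)
        if data is not None:
            self.redis.set(session_id, data)
        return data
```

With this layout, a premature eviction costs one extra database read instead of a surprise logout.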
50. 50/123
Insight on the pit 【 Server-side sessions with Redis 】
✔ A better way is … (thinking)
53. 53/123
Insight on the pit 【 Maximize CPUs usage 】
✔ Redis is single-threaded.
✔ One instance usually uses only one CPU core.
✔ (Aside from background threads and background tasks such as BGSAVE and AOF rewrite.)
55. 55/123
Insight on the pit 【 Maximize CPUs usage 】
✔ Maximize CPU usage:
✔ run as many Redis instances as there are CPU cores.
✔ But:
✔ 1. Set 'maxmemory' for each instance carefully.
✔ 2. Each instance needs a different 'dbfilename'.
✔ 3. Each instance needs a different 'appendfilename'.
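The three caveats above can be sketched as a pair of per-instance redis.conf fragments. The ports, memory budgets, and file names here are illustrative assumptions, not recommendations; the point is that each instance gets its own memory limit and its own persistence file names so BGSAVE and AOF rewrites never collide.

```conf
# --- redis-6379.conf (instance 1) ---
port 6379
maxmemory 4gb                         # budget per instance, not per machine
dbfilename dump-6379.rdb
appendfilename "appendonly-6379.aof"

# --- redis-6380.conf (instance 2) ---
port 6380
maxmemory 4gb
dbfilename dump-6380.rdb
appendfilename "appendonly-6380.aof"
```

If the instances share a working directory, distinct `dbfilename`/`appendfilename` values are what keep one instance's snapshot from overwriting another's.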
58. 58/123
Insight on the pit 【 Memory optimization 】
✔ Sources of memory fragmentation:
✔ SET.
✔ rehash.
✔ When a hash table needs to switch to a bigger or smaller table, this happens incrementally; as keys keep growing, the dict must rehash to keep good performance.
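The incremental rehash mentioned above can be illustrated with a toy dict. This is a sketch of the idea only, not Redis's implementation: two bucket arrays coexist during a rehash, and each operation migrates one bucket instead of pausing to move everything at once.

```python
class IncrementalDict:
    """Toy model of Redis's incremental dict rehashing."""

    def __init__(self, nbuckets=4):
        self.ht0 = [[] for _ in range(nbuckets)]  # current table
        self.ht1 = None                           # rehash target (2x bigger)
        self.rehash_idx = -1                      # next ht0 bucket to migrate
        self.size = 0

    def _bucket(self, table, key):
        return table[hash(key) % len(table)]

    def _tables(self):
        return (self.ht0, self.ht1) if self.ht1 is not None else (self.ht0,)

    def _rehash_step(self):
        # Migrate one ht0 bucket into ht1; swap tables when finished.
        if self.rehash_idx < 0:
            return
        for k, v in self.ht0[self.rehash_idx]:
            self._bucket(self.ht1, k).append((k, v))
        self.ht0[self.rehash_idx] = []
        self.rehash_idx += 1
        if self.rehash_idx == len(self.ht0):
            self.ht0, self.ht1 = self.ht1, None
            self.rehash_idx = -1

    def set(self, key, value):
        if self.rehash_idx >= 0:
            self._rehash_step()                   # piggyback one step
        elif self.size >= len(self.ht0):          # load factor >= 1: grow
            self.ht1 = [[] for _ in range(len(self.ht0) * 2)]
            self.rehash_idx = 0
        for table in self._tables():              # update in place if present
            bucket = self._bucket(table, key)
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)
                    return
        # New keys go into the rehash target table while rehashing.
        target = self.ht1 if self.ht1 is not None else self.ht0
        self._bucket(target, key).append((key, value))
        self.size += 1

    def get(self, key):
        if self.rehash_idx >= 0:
            self._rehash_step()
        for table in self._tables():
            for k, v in self._bucket(table, key):
                if k == key:
                    return v
        return None
```

Spreading the migration over many operations keeps each SET's latency bounded, which matters for a single-threaded server; the cost is transiently holding both tables in memory, one of the fragmentation sources the slide lists.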
59. 59/123
Insight on the pit 【 Memory optimization 】
Ref: http://redisbook.readthedocs.org/en/latest/internal-datastruct/dict.html
60. 60/123
Insight on the pit 【 Memory optimization 】
✔ Key name length:
✔ shorter is better,
✔ but names should still be meaningful.
✔ "product:user1:count" is better than "pu1c".
61. 61/123
Insight on the pit 【 Memory optimization 】
✔ Ziplist.
✔ The ziplist is a specially encoded doubly linked list designed to be very memory efficient.
✔ Under certain settings a data structure is stored as a ziplist: a flat, linear layout that saves a great deal of pointer overhead.
Ref: http://redis.io/topics/memory-optimization
63. 63/123
Insight on the pit 【 Memory optimization 】
✔ Ziplist.
✔ hash-max-ziplist-entries 64
✔ hash-max-ziplist-value 512
✔ With these settings a hash keeps the ziplist encoding while it has ≤ 64 entries and every value is ≤ 512 bytes; exceeding either limit converts it to a real hash table.
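These thresholds live in redis.conf. The hash values below are the slide's; the list and sorted-set lines show the analogous pre-3.2 options with their usual defaults, included here as an assumption about a 3.0-era configuration.

```conf
# Keep small collections in the compact ziplist encoding.
hash-max-ziplist-entries 64
hash-max-ziplist-value   512
list-max-ziplist-entries 128
list-max-ziplist-value   64
zset-max-ziplist-entries 128
zset-max-ziplist-value   64
```

You can verify which encoding a key is using with `OBJECT ENCODING keyname` in redis-cli, which reports e.g. `ziplist` or `hashtable`.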
64. 64/123
Insight on the pit 【 Memory optimization 】
✔ Ziplist.
✔ Twitter use case:
✔ the ziplist threshold is set to the max size of a Timeline, so a Timeline never grows bigger than what a ziplist can store.
Ref: http://highscalability.com/blog/2014/9/8/how-twitter-uses-redis-to-scale-105tb-ram-39mm-qps-10000-ins.html
65. 65/123
Insight on the pit 【 Memory optimization 】
✔ REDIS_SHARED_INTEGERS.
✔ Default is 10,000.
✔ Small integers (including 0) are pre-allocated in a shared pool, so storing them repeatedly adds no memory overhead.
66. 66/123
Insight on the pit 【 Memory optimization 】
Ref: http://redisbook.readthedocs.org/en/latest/datatype/object.html
Flyweight pattern (src/redis.h)
67. 67/123
Insight on the pit 【 Memory optimization 】
✔ Bitmaps.
✔ HyperLogLogs.
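As a rough illustration of why bitmaps are so memory-efficient, here is a pure-Python sketch of the same trick Redis exposes through SETBIT/GETBIT/BITCOUNT (no Redis involved; the class is invented for illustration).

```python
class Bitmap:
    """One bit per id, packed into a bytearray."""

    def __init__(self, nbits):
        self.bits = bytearray((nbits + 7) // 8)

    def set(self, i):
        # Set bit i, e.g. "user i was active today".
        self.bits[i // 8] |= 1 << (i % 8)

    def get(self, i):
        return (self.bits[i // 8] >> (i % 8)) & 1

    def count(self):
        # Population count, the analogue of BITCOUNT.
        return sum(bin(b).count("1") for b in self.bits)
```

One bit per user id means a million users fit in about 125 KB; HyperLogLogs push further still, estimating unique counts in ~12 KB regardless of cardinality.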
71. 71/123
Insight on the pit 【 Availability 】
✔ Twemproxy (Twitter)
✔ Twemproxy is a proxy-based sharding solution.
✔ Good parts:
✔ very stable, enterprise-ready.
72. 72/123
Insight on the pit 【 Availability 】
✔ Twemproxy (Twitter)
✔ Bad parts:
✔ SPOF (Single Point Of Failure); mitigating it needs third-party tools such as Keepalived.
✔ No smooth horizontal scaling.
✔ No dashboard.
✔ Proxy-based: extra round trips, higher latency.
✔ Single-threaded proxy model; it cannot use multiple cores unless you run multiple proxy instances.
73. 73/123
Insight on the pit 【 Availability 】
✔ Twemproxy (Twitter)
✔ Bad parts:
✔ Twemproxy is no longer used by Twitter internally.
Ref: http://highscalability.com/blog/2014/9/8/how-twitter-uses-redis-to-scale-105tb-ram-39mm-qps-10000-ins.html
74. 74/123
Insight on the pit 【 Availability 】
✔ Codis ( 豌豆荚 )
✔ Codis is a proxy-based solution.
✔ Open-sourced by 豌豆莢 (Wandoujia) in 2014.
✔ Written in Go and C.
75. 75/123
Insight on the pit 【 Availability 】
✔ Codis ( 豌豆荚 )
✔ Good parts
✔ Stable, enterprise ready.
✔ Auto rebalance.
✔ High performance:
✔ in a simple benchmark, about twice as fast as Twemproxy.
✔ Multi-threaded proxy model that can use multiple cores.
76. 76/123
Insight on the pit 【 Availability 】
✔ Codis ( 豌豆荚 )
✔ Good parts
✔ Simple:
✔ no Paxos-like coordinators,
✔ no master-slave replication.
✔ Has a dashboard.
77. 77/123
Insight on the pit 【 Availability 】
✔ Codis ( 豌豆荚 )
✔ Bad parts
✔ Proxy-based: extra round trips, higher latency.
✔ Needs third-party coordinators:
✔ Zookeeper or Etcd.
✔ No master-slave replication; it must be implemented separately.
79. 79/123
Insight on the pit 【 Availability 】
✔ Redis Cluster (Official)
✔ Officially supported.
✔ Requires Redis 3.0 or higher.
80. 80/123
Insight on the pit 【 Availability 】
✔ Redis Cluster (Official)
✔ Good parts
✔ Officially supported.
✔ Peer-to-peer Gossip distributed model.
✔ Fewer round trips, lower latency.
✔ Automatically sharded across multiple Redis nodes.
✔ No third-party coordinators needed.
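The sharding itself is simple: the Redis Cluster specification maps every key to one of 16384 slots with CRC16 mod 16384, honoring {hash tag} grouping. A minimal Python sketch of that mapping:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem variant), the polynomial the Redis Cluster spec uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Map a key to one of 16384 slots, hashing only a non-empty {hash tag} if present."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:                 # empty tags are ignored
            key = key[start + 1:end]
    return crc16(key) % 16384
```

Keys sharing a non-empty {hash tag} land in the same slot, which is how multi-key operations remain possible under sharding.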
81. 81/123
Insight on the pit 【 Availability 】
✔ Redis Cluster (Official)
✔ Bad parts
✔ Requires Redis 3.0 or higher.
✔ Needs time to prove its stability.
✔ No dashboard.
✔ Needs a smart client:
✔ the Redis client library must support Redis Cluster.
✔ Higher maintenance and upgrade cost than Codis.
82. 82/123
Insight on the pit 【 Availability 】
✔ Cerberus (HunanTV)
✔ Good parts
✔ Auto rebalance.
✔ Implements its own Redis smart client.
✔ Read-write splitting.
Ref: https://github.com/HunanTV/redis-cerberus
83. 83/123
Insight on the pit 【 Availability 】
✔ Cerberus (HunanTV)
✔ Bad parts
✔ Requires Redis 3.0 or higher.
✔ Proxy-based: extra round trips, higher latency.
✔ Needs time to prove its stability.
✔ No dashboard.
86. 86/123
Insight on the pit 【 Stabilization 】
✔ Performance fluctuation.
✔ Out of memory.
✔ Running as many Redis instances as CPU cores.
✔ Big Ziplist.
✔ Master-slave.
87. 87/123
Insight on the pit 【 Stabilization 】
✔ Performance fluctuation.
✔ In production, stability matters more than average performance.
✔ Stable performance is easier to estimate, and reduces the chance
✔ that a critical moment lands on a performance dip.
✔ Redis is single-threaded.
90. 90/123
Insight on the pit 【 Stabilization 】
✔ Out of memory (OOM).
✔ Be careful with commands that allocate huge amounts of memory.
✔ Reduce the chance that Redis is killed by the OOM killer.
✔ Enable SWAP: losing a little performance beats crashing.
91. 91/123
Insight on the pit 【 Stabilization 】
✔ Out of memory (OOM).
✔ maxmemory
✔ overcommit_memory
✔ SWAP
✔ zone_reclaim_mode
✔ oom_adj
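The two sysctl knobs above can be set persistently; a minimal /etc/sysctl.conf fragment (maxmemory belongs in redis.conf, and SWAP sizing and oom_adj are handled on the following slides):

```conf
# /etc/sysctl.conf — kernel-side OOM hardening discussed above
vm.overcommit_memory = 1
vm.zone_reclaim_mode = 0
```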
92. 92/123
Insight on the pit 【 Stabilization 】
✔ Out of memory (OOM).
✔ maxmemory
✔ A rule of thumb is 50% of total memory.
✔ BGSAVE.
✔ AOF rewrite.
Ref: http://redis.io/topics/admin
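A worked example of the 50% rule: on a 16 GB host, cap Redis at 8 GB so the forked BGSAVE / AOF-rewrite child has headroom, because its copy-on-write pages can approach the parent's full RSS under heavy writes. A redis.conf sketch (the 16 GB host size is illustrative):

```conf
# redis.conf on a 16 GB host: leave room for the BGSAVE / AOF-rewrite fork
maxmemory 8gb
```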
93. 93/123
Insight on the pit 【 Stabilization 】
✔ Out of memory (OOM).
✔ overcommit_memory
✔ overcommit_memory = 1
✔ Always overcommit: when memory is requested, act as if
✔ enough memory is always available.
94. 94/123
Insight on the pit 【 Stabilization 】
✔ Out of memory (OOM).
✔ SWAP
✔ Enable SWAP, sized the same as physical memory.
95. 95/123
Insight on the pit 【 Stabilization 】
✔ Out of memory (OOM).
✔ zone_reclaim_mode
✔ zone_reclaim_mode = 0 (default)
97. 97/123
Insight on the pit 【 Stabilization 】
✔ Out of memory (OOM).
✔ If overcommit_memory = 1, also set oom_adj.
✔ echo -15 > /proc/`pidof redis-server`/oom_adj
✔ Reduces the chance that the OOM killer picks Redis.
✔ Tips (for multiple instances):
✔ for i in $(pidof redis-server);
✔ do echo -15 | sudo tee /proc/$i/oom_adj ; done
99. 99/123
Insight on the pit 【 Stabilization 】
Linux Kernel 3.4 (mm/vmscan.c)
Ref: https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.4.tar.xz
Ref: https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.5.1.tar.gz
Linux Kernel 3.5.1 (mm/vmscan.c)
100. 100/123
Insight on the pit 【 Stabilization 】
linux-2.6.32-504.12.2.el6 (CentOS 6.4, mm/vmscan.c)
Ref: http://rpm.pbone.net/index.php3/stat/3/srodzaj/2/search/kernel-2.6.32-504.12.2.el6.src.rpm
101. 101/123
Insight on the pit 【 Stabilization 】
✔ Running as many Redis instances as CPU cores.
✔ Redis runs some background tasks:
✔ fsync file descriptors.
✔ close file descriptors.
✔ BGSAVE.
✔ AOF rewrite.
✔ Reserve some CPU for those tasks.
102. 102/123
Insight on the pit 【 Stabilization 】
✔ Running as many Redis instances as CPU cores.
✔ Each instance has its own synchronization schedule.
✔ Disable automatic BGSAVE / BGREWRITEAOF and trigger them
✔ manually instead.
✔ Avoid running them on all instances at the same time,
✔ which would consume a burst of resources.
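One way to realize the manual control above is a staggered crontab; a sketch assuming two instances on ports 6379 and 6380 (ports and times are illustrative). Automatic snapshots themselves are disabled with an empty `save` directive in redis.conf:

```conf
# crontab — stagger manual BGSAVEs so forks and disk I/O never overlap
0 4 * * *  redis-cli -p 6379 BGSAVE
30 4 * * * redis-cli -p 6380 BGSAVE
```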
103. 103/123
Insight on the pit 【 Stabilization 】
✔ Master-slave.
✔ Best practices for N Redis nodes:
✔ 1 master, 1 slave, N-2 slaves of the slave.
✔ Never restart all or many slave instances at once:
✔ it drives the master's CPU load high,
✔ and may push the master out of memory.
104. 104/123
Insight on the pit 【 Stabilization 】
✔ String value.
✔ A String value can be at most 512 MB long.
✔ A rule of thumb is to keep values under 5 KB.
107. 107/123
Insight on the pit 【 Low latency 】
✔ Durability vs latency tradeoffs, from higher to lower latency:
✔ AOF + fsync always.
✔ AOF + fsync every second.
✔ AOF + fsync every second +
✔ no-appendfsync-on-rewrite set to yes.
✔ AOF + fsync never.
✔ RDB.
Ref: http://redis.io/topics/latency
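The third option above, as a redis.conf fragment (a common middle ground between durability and latency):

```conf
# redis.conf — AOF, fsync once a second, skip fsync during rewrites
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite yes
```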
108. 108/123
Insight on the pit 【 Low latency 】
✔ Latency induced by network and communication.
✔ Reduce the number of commands and round trips:
✔ Pipelining.
✔ MSET / MGET.
Ref: http://redis.io/topics/latency
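A back-of-envelope model of why batching helps: ignoring server-side time, total time is roughly RTT times the number of round trips. The numbers below are illustrative:

```python
def network_time_ms(round_trips: int, rtt_ms: float) -> float:
    """Cost model: server time ignored; RTT times round trips dominates."""
    return round_trips * rtt_ms

# 1,000 GETs over a link with a 1 ms round-trip time:
naive = network_time_ms(1000, 1.0)    # one round trip per command -> 1000.0 ms
batched = network_time_ms(1, 1.0)     # one pipelined batch / one MGET -> 1.0 ms
```

Pipelining sends all commands before reading any reply; MGET / MSET collapse them into a single command, which also saves per-command parsing on the server.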
109. 109/123
Insight on the pit 【 Low latency 】
✔ Fork time in different systems.
Ref: http://redis.io/topics/latency
Linux on physical machine (Xeon@2.27Ghz): 9 ms/GB
Linux VM on EC2 (Xen): 10 ms/GB
Linux beefy VM on VMware: 12.8 ms/GB
Linux on physical machine (Unknown HW): 13.1 ms/GB
Linux VM on 6sync (KVM): 23.3 ms/GB
Linux VM on Linode (Xen): 424 ms/GB
110. 110/123
Insight on the pit 【 Low latency 】
✔ Disable transparent huge pages:
✔ echo never > /sys/kernel/mm/transparent_hugepage/enabled
111. 111/123
Insight on the pit 【 Low latency 】
✔ Do you really need a proxy-based solution (such as Codis)?
113. 113/123
Insight on the pit 【 Low latency 】
✔ Codis.
✔ With pipelining disabled.
✔ With fewer CPU cores.
114. 114/123
Insight on the pit 【 Low latency 】
Disable pipeline.
Ref: https://github.com/wandoulabs/codis/blob/master/doc/benchmark_zh.md
115. 115/123
Insight on the pit 【 Low latency 】
Less CPU cores.
Ref: https://github.com/wandoulabs/codis/blob/master/doc/benchmark_zh.md
(benchmark charts for 4, 8, 12, and 16 cores)
116. 116/123
Insight on the pit 【 Low latency 】
✔ Big Ziplist.
✔ Adding to and deleting from a ziplist is inefficient,
✔ especially with a very large list.
✔ Deleting from a ziplist uses memmove to move data around,
✔ to keep the list contiguous.
✔ Adding to a ziplist requires a memory realloc call to make
✔ enough space for the new entry.
Ref: http://highscalability.com/blog/2014/9/8/how-twitter-uses-redis-to-scale-105tb-ram-39mm-qps-10000-ins.html
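The memmove / realloc costs can be mimicked with Python's contiguous bytearray (the function names are hypothetical, and real ziplist entries carry per-entry headers this sketch omits):

```python
# Contiguous-buffer analogy for a ziplist: entries live back-to-back in one
# allocation, so a middle delete must shift everything after it (memmove),
# and an insert must grow the buffer (realloc) and shift the tail right.
def zl_delete(buf: bytearray, start: int, length: int) -> bytearray:
    buf[start:start + length] = b""    # shifts the tail left, O(tail size)
    return buf

def zl_insert(buf: bytearray, pos: int, entry: bytes) -> bytearray:
    buf[pos:pos] = entry               # grows the buffer, shifts the tail right
    return buf
```

Both operations cost O(bytes after the edit point), which is why they degrade badly on a big ziplist.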
117. 117/123
Insight on the pit 【 Low latency 】
✔ Big Ziplist.
✔ Potentially high latency for write operations due to
✔ timeline size.
Ref: http://highscalability.com/blog/2014/9/8/how-twitter-uses-redis-to-scale-105tb-ram-39mm-qps-10000-ins.html
118. 118/123
Insight on the pit 【 Low latency 】
✔ Redis client.
✔ Use a connection pool.
✔ Enable keep-alive.
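A minimal sketch of the connection-pool idea (the `connect` factory is injected so the example stays self-contained; a real pool would open keep-alive TCP sockets to Redis):

```python
import queue

class ConnectionPool:
    """Reuse a fixed set of connections instead of paying a TCP
    handshake per request."""
    def __init__(self, connect, size=4):
        self._idle = queue.LifoQueue()
        for _ in range(size):
            self._idle.put(connect())

    def acquire(self):
        return self._idle.get()    # blocks when the pool is exhausted

    def release(self, conn):
        self._idle.put(conn)
```

The LIFO order keeps recently used (and therefore still-warm, still-alive) connections in rotation, which pairs naturally with TCP keep-alive on the sockets themselves.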