This document provides an overview of distributed caching solutions and summarizes key points about local caching, replicated caching, and distributed caching. It discusses common use cases for distributed caching and outlines some popular open source Java caching frameworks like Ehcache, Infinispan, Hazelcast, Memcached, and Terracotta Server. The document also includes examples of Ehcache configuration and an overview of BigMemory, Ehcache's off-heap memory solution.
This document summarizes benchmark tests of NoSQL document databases using MongoDB. It compares the performance of MongoDB's MapReduce and Aggregation Framework on single node and sharded cluster configurations. The tests measured query response times for common aggregation operations like counting the most frequently mentioned users or hashtags. The results showed that the Aggregation Framework was roughly 2 times faster than MapReduce. Scaling out to a sharded cluster with multiple nodes initially did not improve performance. However, partitioning the data across multiple shards in a modest 3 node cluster showed better performance than a single node, with query times decreasing as more shards were added up to an optimal number.
Advanced Caching Techniques with Ehcache, BigMemory, Terracotta, and ColdFusion (ColdFusionConference)
Rob Brooks-Bilson is a senior director at Amkor Technology who has been involved with ColdFusion for 18 years. He is the author of two books on ColdFusion programming and an Adobe Community Professional for ColdFusion. The document outlines his agenda for a presentation on caching in ColdFusion, which will cover caching tags and functions, Ehcache, replicating caches, BigMemory Go, and distributed caching with Terracotta. It provides legal disclaimers about the third-party applications discussed and their lack of official Adobe support.
This document discusses various goals, techniques, and solutions for replicating PostgreSQL databases. The goals covered are high availability, performance for reads and writes, supporting wide area networks, and handling offline peers. Techniques include master-slave and multi-master replication, proxies, and using standby systems. Specific solutions described are Slony-I, Slony-II, PGCluster, DBMirror, pgpool, WAL replication, Sequoia, DRBD, and shared storage. The document provides an overview of how each solution can help achieve different replication goals.
We all love Ehcache. But the rise of real-time Big Data means you want to keep larger amounts of data in memory with low, predictable latency. In this webinar,
we explain how BigMemory Go can turbocharge your Ehcache deployment.
Presented at JavaOne 2015.
JSR107, aka the Temporary Caching API for the Java Platform, was finalized almost two years ago. We've heard all about its ease of use and capabilities. But there is much left unaddressed. The good news is that the EG is looking at addressing many of the current shortcomings... But what do you do now? Go for proprietary APIs?!
Ehcache, the de facto Java caching API for 10 years now, has gone through a major API revamp: Ehcache3. One major theme, beyond its usual ease of use, was JSR107: integrating it natively, but also looking beyond it. With close to no API tie-ins, Ehcache3 lets you extend the JSR107 API transparently to go beyond the specification, both topology-wise (going off-heap to scale up, or clustering your caches to scale out) and functionality-wise (transactional caches, automatic resource control, or even a write-behind cache to scale out writes).
Best of all, this is only minimally intrusive, free to use, and available as part of open-source Ehcache v3, which went GA earlier this year.
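The tiering theme above (on-heap for speed, a larger tier behind it for capacity) can be illustrated with a minimal sketch in plain Java. This is a conceptual illustration only, not Ehcache's actual API: a small access-ordered on-heap tier demotes its least-recently-used entries to a larger backing map, and promotes them back on a hit.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual two-tier cache: a small on-heap LRU tier in front of a
// larger backing tier, mirroring the heap/off-heap tiering idea.
// Hypothetical class; Ehcache3 configures tiers declaratively instead.
class TieredCache<K, V> {
    private final int heapCapacity;
    private final Map<K, V> backingTier = new HashMap<>();
    private final Map<K, V> heapTier;

    TieredCache(int heapCapacity) {
        this.heapCapacity = heapCapacity;
        // access-order LinkedHashMap evicts least-recently-used entries
        this.heapTier = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > TieredCache.this.heapCapacity) {
                    // demote to the backing tier instead of dropping
                    backingTier.put(eldest.getKey(), eldest.getValue());
                    return true;
                }
                return false;
            }
        };
    }

    void put(K key, V value) { heapTier.put(key, value); }

    V get(K key) {
        V v = heapTier.get(key);
        if (v == null) {
            v = backingTier.remove(key);
            if (v != null) heapTier.put(key, v); // promote on hit
        }
        return v;
    }
}
```

In a real deployment the backing tier would be off-heap or clustered storage rather than another heap map; the demote/promote flow between tiers is the part this sketch is meant to show.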
The document discusses PostgreSQL high availability and scaling options. It covers horizontal scaling using load balancing and data partitioning across multiple servers. It also covers high availability techniques like master-slave replication, warm standby servers with point-in-time recovery, and using a heartbeat to prevent multiple servers from becoming a master. The document recommends an initial architecture with two servers using warm standby and point-in-time recovery with a heartbeat for high availability. It suggests scaling the application servers horizontally later on if more capacity is needed.
The document discusses various techniques for performance tuning and cluster administration in HBase, including garbage collection tuning, use of memstore-local allocation buffers (MSLAB), enabling compression, optimizing splits and compactions through pre-splitting regions, and addressing hotspotting through manual splits. It provides guidance on configuring garbage collection, compression codecs, and approaches for managing splits and compactions to reduce disk I/O loads.
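The hotspotting problem mentioned above is commonly mitigated by salting row keys so that sequential keys spread across pre-split regions. A minimal sketch of the idea, with a hypothetical helper class (not from the HBase API):

```java
// Hypothetical row-key salting helper: sequential keys (e.g. timestamps)
// are prefixed with a bucket id derived from a hash, spreading writes
// across pre-split regions instead of hammering a single one.
class KeySalter {
    private final int buckets;

    KeySalter(int buckets) { this.buckets = buckets; }

    String salt(String rowKey) {
        // floorMod keeps the bucket non-negative even for negative hashes
        int bucket = Math.floorMod(rowKey.hashCode(), buckets);
        return String.format("%02d-%s", bucket, rowKey);
    }
}
```

The trade-off is that range scans over the original key order now require one scan per salt bucket, so the bucket count is usually kept close to the number of regions.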
Ehcache is an open source Java caching library that provides fast, scalable caching for applications. It allows for in-process caching with single nodes or distributed caching across multiple nodes. Ehcache provides features like memory and disk storage, replication, search capabilities, and integration with Terracotta for distributed caching. It uses common caching patterns like cache-aside, read-through, write-through, and cache-as-SoR (system of record). Ehcache has a simple API and is lightweight, scalable, and standards-based.
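The cache-aside pattern named above can be sketched in a few lines of plain Java. This is a generic illustration, not Ehcache's API: the application checks the cache first and, on a miss, loads from the system of record (here a stand-in loader function) and populates the cache.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Generic cache-aside sketch: check the cache first; on a miss, load
// from the system of record (SOR) and populate the cache. The loader
// function stands in for a database or service call.
class CacheAside<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // the SOR lookup

    CacheAside(Function<K, V> loader) { this.loader = loader; }

    V get(K key) {
        // computeIfAbsent gives an atomic check-then-load per key
        return cache.computeIfAbsent(key, loader);
    }
}
```

Read-through differs only in ownership: there the cache itself is configured with the loader, so callers never see the SOR at all.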
Distributed Caching Using the JCACHE API and ehcache, Including a Case Study ... (elliando dias)
This document summarizes a presentation on distributed caching using the JCACHE API and ehcache. The presentation covers how to use ehcache to cache web pages, database queries, and configure distributed caching across multiple servers. It also discusses the JSR 107 JCACHE specification and its implementation in ehcache. The presentation concludes with a case study of caching at Wotif.com.
The document provides an introduction and agenda for an HBase presentation. It begins with an overview of HBase and discusses why relational databases are not scalable for big data through examples of a growing website. It then introduces concepts of HBase including its column-oriented design and architecture. The document concludes with hands-on examples of installing HBase and performing basic operations through the HBase shell.
This document discusses experiments conducted to determine the optimal hardware and software configurations for building a cost-efficient Swift object storage cluster with expected performance. It describes testing different configurations for proxy and storage nodes under small and large object upload workloads. The results show that for small object uploads, high-CPU instances performed best for storage nodes while either high-CPU or high-end instances worked well for proxies. For large object uploads, large instances were most cost-effective for storage nodes and high-end instances remained suitable for proxies. The findings provide guidance on right-sizing hardware based on workload characteristics.
This document provides an overview of HBase architecture and advanced usage topics. It discusses course credit requirements, HBase architecture components like storage, write path, read path, files, region splits and more. It also covers advanced topics like secondary indexes, search integration, transactions and bloom filters. The document emphasizes that HBase uses log-structured merge trees for efficient data handling and operates at the disk transfer level rather than disk seek level for performance. It also provides details on various classes involved in write-ahead logging.
This document discusses using memcached for caching data in memory across multiple servers. It introduces ReplCache, which is a Java implementation of memcached that adds redundancy by storing each key on multiple servers. ReplCache improves on memcached by allowing data to be migrated when servers shut down and rebalanced when new servers are added. The document also discusses Infinispan, an open source data grid platform that provides features similar to ReplCache along with additional capabilities such as persistence and querying.
Implementing High Availability Caching with Memcached (Gear6)
Typical Memcached deployments do not comprehensively address web site requirements for high availability. Depending on your web architecture, a single failure can disable your web caches. This presentation offers real-world solutions to the high availability challenges common to large, dynamic websites using Memcached, specifically:
* Options and benefits for deploying high availability services within Memcached
* How companies are approaching high availability
* Considerations on building and deploying high availability
o Recommendations for a typical Memcached environment
o Open source tools available
o High level costs for deployment
The document discusses several key factors for optimizing HBase performance including:
1. Reads and writes compete for disk, network, and thread resources so they can cause bottlenecks.
2. Memory allocation needs to balance space for memstores, block caching, and Java heap usage.
3. The write-ahead log can be a major bottleneck and increasing its size or number of logs can improve write performance.
4. Flushes and compactions need to be tuned to avoid premature flushes causing "compaction storms".
Building Tungsten Clusters with PostgreSQL Hot Standby and Streaming Replication (Command Prompt, Inc.)
Alex Alexander & Linas Virbalas
Hot standby and streaming replication will move the needle forward for high availability and scaling for a wide number of applications. Tungsten already supports clustering using warm standby. In this talk we will describe how to build clusters using the new PostgreSQL features and give our report from the trenches.
This talk will cover how hot standby and streaming replication work from a user perspective, then dive into a description of how to use them, taking Tungsten as an example. We'll cover the following issues:
* Configuration of warm standby and streaming replication
* Provisioning new standby instances
* Strategies for balancing reads across primary and standby database
* Managing failover
* Troubleshooting and gotchas
Please join us for an enlightening discussion of a set of PostgreSQL features that are interesting to a wide range of PostgreSQL users.
OpenStack is rapidly gaining popularity with businesses as they realize the benefits of a private cloud architecture. This presentation was delivered by Dave Page, Chief Architect, Tools & Installers at EnterpriseDB & PostgreSQL Core Team member during PG Open 2014. He addressed some of the common components of OpenStack deployments, how they can affect Postgres servers, and how users might best utilize some of the features they offer when deploying Postgres, including:
• Different configurations for the Nova compute service
• Use of the Cinder block store
• Virtual networking options with Neutron
• WAL archiving with the Swift object store
The document discusses how Terracotta can be used to create lightweight grids for enterprise applications to improve availability and scalability. It provides an overview of challenges like single points of failure and bottlenecks. Terracotta allows clustering of Java applications across multiple JVMs in a transparent way and reduces the complexity of distributed computing. A simple example demonstrates how Terracotta can be used to distribute a queue across multiple nodes.
Memcached is a high-performance, distributed memory caching system that is used to speed up dynamic web applications by caching objects in memory to reduce database load. It works by storing objects in memory to allow for fast retrieval, improving response times significantly. Major companies that use memcached include Facebook, Yahoo, Amazon, and LiveJournal. It provides features like consistent hashing for object distribution, multithreading, and replication.
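The consistent hashing feature mentioned above is what lets memcached clients spread keys across servers with minimal remapping when servers join or leave. A minimal sketch of the idea in plain Java (illustrative only, not a real memcached client; the hypothetical `HashRing` class uses `String.hashCode` where production clients use stronger hashes):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Consistent-hash ring sketch in the spirit of memcached clients:
// each server owns several points ("virtual nodes") on a ring, and a
// key maps to the first server clockwise from its hash. Adding or
// removing a server only remaps the keys in one arc of the ring.
class HashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    void addServer(String server, int virtualNodes) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put((server + "#" + i).hashCode(), server);
        }
    }

    void removeServer(String server, int virtualNodes) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.remove((server + "#" + i).hashCode());
        }
    }

    String serverFor(String key) {
        // first ring point at or after the key's hash, wrapping around
        SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }
}
```

With a naive `hash(key) % serverCount` scheme, removing one server remaps nearly every key; with the ring, only the keys in the departed server's arcs move.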
PostgreSQL High Availability in a Containerized World (Jignesh Shah)
This document discusses high availability for PostgreSQL in a containerized environment. It outlines typical enterprise requirements for high availability including recovery time objectives and recovery point objectives. Shared storage-based high availability is described as well as the advantages and disadvantages of PostgreSQL replication. The use of Linux containers and orchestration tools like Kubernetes and Consul for managing containerized PostgreSQL clusters is also covered. The document advocates for using PostgreSQL replication along with services and self-healing tools to provide highly available and scalable PostgreSQL deployments in modern container environments.
Planning & Best Practice for Microsoft Virtualization (Lai Yoong Seng)
This document provides best practices for planning and configuring Microsoft virtualization. It discusses guidelines for Hyper-V hosts, virtual machines, SQL Server, Active Directory, Exchange Server, and SharePoint. It recommends using tools like MAP for assessment and following Microsoft's support policies. Key guidelines include processor and memory allocation, storage configuration, network optimization, and redundancy. The document aims to help users understand and apply Microsoft's virtualization best practices.
Rigorous and Multi-tenant HBase Performance (Cloudera, Inc.)
The document discusses techniques for rigorously measuring Apache HBase performance in both standalone and multi-tenant environments. It introduces the Yahoo! Cloud Serving Benchmark (YCSB) and best practices for cluster setup, workload generation, data loading, and measurement. These include pre-splitting tables, warming caches, setting target throughput, and using appropriate workload distributions. The document also covers challenges in achieving good multi-tenant performance across HBase, MapReduce and Apache Solr.
Problems with PostgreSQL on Multi-core Systems with Multi-Terabyte Data (Jignesh Shah)
This document discusses PostgreSQL performance on multi-core systems with multi-terabyte data. It covers current market trends towards more cores and larger data sizes. Benchmark results show that PostgreSQL scales well on inserts up to a certain number of clients/cores but struggles with OLTP and TPC-E workloads due to lock contention. Issues are identified with sequential scans, index scans, and maintenance tasks like VACUUM as data sizes increase. The document proposes making PostgreSQL utilities and tools able to leverage multiple cores/processes to improve performance on modern hardware.
HBaseCon 2012 | Learning HBase Internals - Lars Hofhansl, Salesforce (Cloudera, Inc.)
The strength of an open source project resides entirely in its developer community; a strong democratic culture of participation and hacking makes for a better piece of software. The key requirement is having developers who are not only willing to contribute, but also knowledgeable about the project’s internal structure and architecture. This session will introduce developers to the core internal architectural concepts of HBase, not just “what” it does from the outside, but “how” it works internally, and “why” it does things a certain way. We’ll walk through key sections of code and discuss key concepts like the MVCC implementation and memstore organization. The goal is to convert serious “HBase Users” into HBase Developer Users”, and give voice to some of the deep knowledge locked in the committers’ heads.
This document discusses techniques for improving latency in HBase. It analyzes the write and read paths, identifying sources of latency such as networking, HDFS flushes, garbage collection, and machine failures. For writes, it finds that single puts can achieve millisecond latency while streaming puts can hide latency spikes. For reads, it notes cache hits are sub-millisecond while cache misses and seeks add latency. GC pauses of 25-100ms are common, and failures hurt locality and require cache rebuilding. The document outlines ongoing work to reduce GC, use off-heap memory, improve compactions and caching to further optimize for low latency.
This document provides an overview of caching and distributed caching principles. It discusses the goals of caching to improve performance by storing frequently accessed data closer to where it is needed. It explains concepts like memory hierarchy and why distributed caching is needed to manage huge amounts of data across multiple servers. Some key use cases of caching are also mentioned. The document discusses caching topologies like partitioned and replicated caching. It provides examples of caching patterns and load techniques. Finally, it discusses some prominent distributed caching solutions and shows sample code for using Hazelcast.
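The partitioned topology described above can be sketched in plain Java. This is a conceptual illustration, not Hazelcast's API: each key hashes to exactly one of N partitions, so total capacity grows with the partition (node) count; real data grids additionally rebalance partitions and keep backup copies for fault tolerance.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal partitioned-cache sketch: each key is owned by exactly one
// partition, chosen by hashing, so data (and load) is divided rather
// than copied. A replicated topology would instead write every entry
// to every node.
class PartitionedCache<K, V> {
    private final List<Map<K, V>> partitions = new ArrayList<>();

    PartitionedCache(int partitionCount) {
        for (int i = 0; i < partitionCount; i++) {
            partitions.add(new HashMap<>());
        }
    }

    private Map<K, V> partitionFor(K key) {
        // floorMod keeps the index non-negative for negative hash codes
        return partitions.get(Math.floorMod(key.hashCode(), partitions.size()));
    }

    void put(K key, V value) { partitionFor(key).put(key, value); }
    V get(K key)             { return partitionFor(key).get(key); }
}
```

The choice between the two topologies is the classic trade-off the document describes: partitioning scales capacity, replication scales read availability.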
This document discusses reducing heap memory stress in Java applications by using off-heap memory techniques. It provides an overview of Java memory fundamentals and the limitations of on-heap caching. It then introduces Apache DirectMemory, an open source project that implements an off-heap caching solution using ByteBuffers to improve performance by reducing garbage collection overhead. Examples of using DirectMemory for multi-layer caching and as a cache server are also presented.
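The ByteBuffer mechanism behind DirectMemory (and BigMemory) can be shown with a minimal sketch. This is an illustration of the principle, not DirectMemory's API: values are copied into a direct buffer allocated outside the Java heap, so the payloads add no garbage-collection pressure; only small (offset, length) pointers remain on the heap. The `OffHeapStore` class and its fixed 1 MiB slab are assumptions for the example; a real implementation also handles slab exhaustion, eviction, and space reclamation.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch of off-heap value storage: payloads live in a direct
// ByteBuffer outside the heap; the on-heap index holds only
// (offset, length) pointers into that slab.
class OffHeapStore {
    private final ByteBuffer arena = ByteBuffer.allocateDirect(1 << 20); // 1 MiB slab
    private final Map<String, int[]> index = new HashMap<>(); // key -> {offset, length}

    void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        int offset = arena.position();
        arena.put(bytes); // copy payload off-heap
        index.put(key, new int[] { offset, bytes.length });
    }

    String get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null;
        byte[] out = new byte[loc[1]];
        ByteBuffer view = arena.duplicate(); // independent cursor, same memory
        view.position(loc[0]);
        view.get(out); // copy back on-heap to read
        return new String(out, StandardCharsets.UTF_8);
    }
}
```

The serialize-on-write / deserialize-on-read copies are the cost of this approach; the win is that a multi-gigabyte value store no longer inflates GC pause times.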
This document discusses Java memory usage on Linux systems and how to monitor and troubleshoot Java applications running on Linux. It covers Java memory structures like heap, non-heap memory and thread stacks. It also discusses Linux memory management and key metrics like resident size. The document provides tips on setting up the JVM, tuning network and OS settings. It recommends tools like jstack, jstat and jcmd for diagnosing issues like high CPU usage, leaks or out of memory errors.
Spring One 2 GX 2014 - CACHING WITH SPRING: ADVANCED TOPICS AND BEST PRACTICESMichael Plöd
Caching is relevant for a wide range of business applications and there is a huge variety of products in the market ranging from easy to adopt local heap based caches to powerful distributed data grids. This talk addresses advanced usage of Spring’s caching abstraction such as integrating a cache provider that is not integrated by the default Spring Package. In addition to that I will also give an overview of the JCache Specification and it’s adoption in the Spring ecosystem. Finally the presentation will also address various best practices for integrating various caching solutions into enterprise grade applications that don’t have the luxury of having „eventual consistency“ as a non-functional requirement.
This document discusses in-memory data grids and JBoss Infinispan. It begins with an overview of in-memory data grids, their uses for caching, performance boosting, scalability, and high availability. It then discusses Infinispan specifically, describing it as an open-source, distributed in-memory key-value data grid and cache. The document outlines Infinispan's architecture, features like persistence, transactions, querying, distributed execution, and map-reduce capabilities. It also provides a case study on using Infinispan for session clustering in a web application.
Running your Java EE 6 applications in the clouds Arun Gupta
Running Java EE 6 applications in the cloud can provide scalability and flexibility. The document discusses deploying Java EE 6 applications on various cloud platforms including Amazon EC2, RightScale, Elastra, and Joyent. It provides an introduction to Java EE 6 and demonstrates running applications on Amazon EC2. It also compares deployment processes and pricing models across cloud vendors.
Running your Java EE 6 applications in the Cloud @ Silicon Valley Code Camp 2010Arun Gupta
Arun Gupta presented on running Java EE 6 applications in the cloud. He discussed Java EE 6 support on various cloud platforms including Amazon, RightScale, Elastra, and Joyent. He also compared features of different cloud vendors and how Java EE can evolve to better support cloud computing. Gupta concluded that Java EE 6 applications can easily be deployed to various clouds and GlassFish provides a feature-rich implementation of Java EE 6.
This document discusses using Memcache to cache data and speed up websites. Memcache stores data in RAM for fast access and can reduce database queries and page load times. It acts as a simple key-value store. The document outlines how to set up Memcache, use the PHP Memcache client to store and retrieve data, and techniques for caching database queries and sessions to improve performance. Memcache is fast but does not provide data redundancy or failover, so caching strategies must account for potential data loss.
VMworld 2013: Extreme Performance Series: Storage in a Flash VMworld
vSphere 5.5 introduces new flash technologies including vSphere Flash Read Cache (vFRC) and Virtual SAN (vSAN) to improve application performance when leveraging flash storage. vFRC provides read caching of VM I/Os on locally connected flash devices to reduce latency. vSAN aggregates flash and HDD resources from multiple hosts to provide a shared datastore with high performance and data protection. Tests showed vFRC improved response times by up to 2x for data warehousing and 39% higher transactions for databases. vSAN delivered performance comparable to all-flash arrays for VDI workloads and scaled linearly with additional hosts.
In today’s systems , the time it takes to bring data to the end-user can be very long, especially under heavy load. An application can often increase performance by using an appropriate caching system. There are many caching level that you can use in our application today : CDN, In-Memory/Local Cache, Distributed Cache, Outut Cache, Browser Cache, Html Cache
The document provides an overview of Azure App Fabric Caching. It discusses that the caching service is a distributed, in-memory application cache that can accelerate Azure applications. It can be used for frequently accessed data caching, ASP.Net session state, and output caching. The document reviews configuration options, usage, session state provider considerations, tracing, understanding quotas and errors, local caching options, and limitations.
Optimizing elastic search on google compute engineBhuvaneshwaran R
If you are running the elastic search clusters on the GCE, then we need to take a look at the Capacity planning, OS level and Elasticsearch level optimization. I have presented this at GDG Delhi on Feb 22,2020.
Dennis van der Stelt will give a presentation on using Velocity, Microsoft's distributed caching platform, for session state management and caching application data across web servers. The presentation will cover challenges for web server farms without caching, an overview of Velocity's architecture and features, different caching strategies in Velocity including partitioned and replicated caches, and best practices for developing a caching strategy and using Velocity.
This document discusses IBM Semeru Runtimes for improving Java workloads in the cloud. It covers:
- Semeru Runtimes are optimized OpenJDK runtimes from IBM that provide smaller container images, faster startup times, and more efficient resource utilization for Java applications in cloud environments.
- Techniques used include using smaller base images, ahead-of-time compilation, shared class caching, and container awareness to size memory usage appropriately.
- Semeru Runtimes also provide tools for live debugging of containerized Java applications without restarting containers.
- Upcoming features will include support for the new cgroups version 2 for more advanced Linux container resource management.
Developing High Performance and Scalable ColdFusion Application Using Terraco...ColdFusionConference
This presentation discusses using Terracotta Ehcache to scale ColdFusion applications. It covers caching basics and options like on-heap, off-heap, and distributed caching. Attendees will learn how to configure Ehcache and Terracotta to enable distributed caching for ColdFusion to improve performance and scalability. Real-world customer examples are provided that demonstrate how Terracotta Ehcache helped online payment processors detect fraud faster and assisted Healthcare.gov in reducing response times.
Developing High Performance and Scalable ColdFusion Applications Using Terrac...Shailendra Prasad
1. How to scale – options (pros and cons)
2. Caching basics (various options available)
3. Recent updates of Open source Ehcache project.
4. Scaling your existing application with Ehcache, Terracotta OSS
5. Advance caching techniques for scaling using Terracotta BigMemory
6. Customer use cases where caching was mission critical
This document provides an overview of Oracle Coherence, an in-memory data grid. It discusses what a data grid is and how Coherence works, including clustering, caching, querying, and aggregating data. It also provides examples of how Coherence can be used and customer use cases, such as for user session management across brands.
Information on how PHP developers can implement data caching to improve performance and scalability. Presented at the West Suburban Chicago PHP Meetup on February 7, 2008.
Similar to Jug Lugano - Scale over the limits (20)
1. Scale over the limits:
an overview of modern distributed caching solutions
Davide Carnevali – Lorenzo Acerbo
Talk duration: 50 minutes
Slides: 40
JUG Lugano – Lugano (CH), October 5th 2010
2. What, Why and When about distributed caching?
What
Fast and efficient memory to store frequently used data, shared by several machines.
Why
• reduce database load
• increase the cluster's tolerance to failures
• enable horizontal scalability
When
• several machines need the same data
• computation is spread throughout many individual nodes
3. How does a local cache work?
Distinct JVMs have their own cache, and access the database independently.
[diagram: three JVMs, each with its own local cache, querying the database separately; X(C1) == X(C2) == X(C3)]
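The local-cache case above can be pictured with a tiny per-JVM cache sketch (plain JDK, not tied to any of the frameworks discussed later): each instance evicts its least-recently-used entries on its own, which is why C1, C2 and C3 may each hit the database for the same key.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal per-JVM local cache with LRU eviction (illustrative sketch only).
// Each JVM holds its own instance, so the same key may be loaded from the
// database independently on every node.
public class LocalLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LocalLruCache(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict once capacity is exceeded
    }
}
```

Because the eviction decision is purely local, nothing keeps the three caches coherent; that is the gap the replicated and distributed modes on the next slides address.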
4. How does a replicated cache work?
Distinct JVMs have their own cache, but share the same data.
[diagram: three JVMs whose caches hold replicated copies of the same entries; X(C1) == X(C2) == X(C3)]
Heap-based data is accessible by any JVM.
5. How does a distributed cache work?
Distinct JVMs see caches and data as their own.
[diagram: the cached entries are partitioned across the JVMs, which together present a single logical cache over the database]
Heap-based data is accessible by any JVM.
6. Replicated VS Distributed
Replicated Mode
Pros:
• Best choice for small clusters
• All data is in local memory (high read performance)
Cons:
• Does not scale in terms of memory
• Limited to the heap of a single JVM

Distributed Mode
Pros:
• Scales (almost) linearly
• No memory limit
• Resilience to server failure
Cons:
• Not all data is in local memory
• Higher network traffic
• Performance lost on serialization/deserialization
7. Caching strategy – Write Behind with distributed mode
When an application puts (writes) information into the distributed cache, two different mechanisms can be used to
synchronize the shared memory and write the data to the backing resource (such as a database).
The write-behind strategy, also known as the asynchronous strategy, means that updates to the cache store are done by a
separate thread from the client thread interacting with the cache.
[diagram: a put() on JVM 1 is replicated across JVMs 2–4; a separate thread later calls store() to persist the entry to the database]
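The mechanism above can be sketched in a few lines of plain JDK code (a framework-agnostic illustration, not the implementation of any product discussed here): put() returns as soon as the in-memory map is updated, while a background thread drains a queue and invokes a store callback standing in for the database write.

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.function.BiConsumer;

// Framework-agnostic sketch of the write-behind strategy: put() returns as soon
// as the in-memory map is updated; a single background thread later drains the
// queue and calls the store callback (standing in for the database write).
public class WriteBehindCache<K, V> {
    private final Map<K, V> memory = new ConcurrentHashMap<>();
    private final BlockingQueue<K> pending = new LinkedBlockingQueue<>();
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    public WriteBehindCache(BiConsumer<K, V> store) {
        writer.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    K key = pending.take(); // block until a write is queued
                    store.accept(key, memory.get(key));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    public void put(K key, V value) {
        memory.put(key, value); // visible to readers immediately
        pending.offer(key);     // persisted later, asynchronously
    }

    public V get(K key) { return memory.get(key); }

    public void shutdown() { writer.shutdownNow(); }
}
```

Note the trade-off the slide's CAP discussion returns to later: between put() and the background store() the database is stale, and a crash of the writer loses queued updates.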
8. Some Java frameworks
Many Java open source frameworks are available to create your distributed cache.
We cover some of the solutions with major features and good open source communities.

Name                   Company / Community   License              Link
EHCache 2.x            Terracotta            Apache License 2.0   http://ehcache.org/
Infinispan 4.0 FINAL   JBoss – Red Hat       LGPL 2.1             http://jboss.org/infinispan
HazelCast 1.9          Hazelcast             Apache License 2.0   http://www.hazelcast.com
Memcached [Server]     –                     BSD License          http://memcached.org/
Xmemcached [Client]    –                     Apache License 2.0   http://code.google.com/p/xmemcached/
Terracotta Server      Terracotta            Commercial           http://terracotta.org/
9. EHCache Overview
• EHCache requires a Java 1.5 or 1.6 runtime
• Standards-based: JSR 107 API
• Replicated caching via JGroups TCP/IP, RMI or JMS
• Transactional support through JTA
• Dynamically modifying cache configuration (at runtime): the Cache Manager is a singleton
• Fast integration with ORMs such as Hibernate

Two interesting cache decorators
• UnlockedReadsView
Normally a read lock must first be obtained to read data from the backend. If there is an outstanding write lock, the read
lock queues up. This is done so that the happens-before guarantee can be made.
If the business logic is happy to read stale data even if a write lock has been acquired in preparation for changing
it, then much higher speeds can be obtained.
• NonStopCache
Provides SLA-level control features for your cache. Automatically responds to cluster topology events to take a pre-configured action.
• You're using a write-through cache and your DB hangs. Use your non-stop cache decorator to keep it from
hanging your entire application server.
• You have one cache that is accessed for multiple functions. For some of those functions you want operations
to time out after 5 seconds and for others you want 20 seconds. You can have multiple decorators on the
same cache with different semantics defined.
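The timeout behaviour NonStopCache provides can be pictured with a small stand-alone decorator. This is a hypothetical sketch using only the JDK, not the real Terracotta API: each read is submitted to an executor and abandoned after a configurable timeout, so a hanging backing store cannot hang the caller.

```java
import java.util.Map;
import java.util.concurrent.*;

// Hypothetical sketch of a NonStopCache-style decorator (NOT the Terracotta API):
// every read is bounded by a timeout so a hanging backing store cannot block
// the calling thread.
public class TimeoutCacheDecorator<K, V> {
    private final Map<K, V> delegate;
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final long timeoutMillis;

    public TimeoutCacheDecorator(Map<K, V> delegate, long timeoutMillis) {
        this.delegate = delegate;
        this.timeoutMillis = timeoutMillis;
    }

    // Returns null when the underlying cache does not answer in time,
    // mimicking a pre-configured "give up on timeout" action.
    public V get(K key) {
        Future<V> future = pool.submit(() -> delegate.get(key));
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);
            return null;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() { pool.shutdownNow(); }
}
```

Wrapping the same delegate in two decorators with different timeouts mirrors the slide's point about giving different functions different SLAs on one cache.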
11. Enterprise EHCache BigMemory (Beta)
A distributed cache backed by the MemoryStore uses a lot of heap memory, so you need to increase the heap
size, but "garbage collection (GC) is like a ticking time bomb for Java".
To minimize the garbage-collection penalty, organizations often limit heap size to 2–4 GB. This
constrains cache size, limiting the performance benefits that can be achieved by caching.
BigMemory uses "off-heap" memory:
• Limited only by the amount of RAM on your hardware and the address space
• A 64-bit OS is strongly recommended
Off-heap data is stored in bytes, which has two implications:
• Only Serializable cache keys and values can be placed in the store
• Serialization and deserialization take place on putting to and getting from the store, so
the off-heap store is slower in an absolute sense
12. Enterprise EHCache BigMemory (Beta)
Configuration (with EHCache)
overflowToOffHeap : true | false, enables the off-heap memory store
maxMemoryOffHeap : sets the amount of off-heap memory available to the cache
maxElementsInMemory : max elements in memory; at least 100 elements recommended
maxOffHeapValueSize : max size (in MB) of each object. Default is 4 MB
DoNotHaltOnCriticalAllocationDelay : if memory is dramatically overallocated for at least 3 seconds (1 GB),
the application calls System.exit(1); with this property you force it to wait instead
diskPersistent : true | false, stores asynchronously to the Disk Store to handle JVM shutdown
diskSpoolBufferSizeMB : the max size of the disk store buffer
…
BigMemory is built in to Terracotta Server Arrays.
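Tying the attributes above together, a cache entry in ehcache.xml might look like the following. This is a sketch based on the attribute names listed on this slide (cache name and sizes are made up for illustration), not a configuration taken from the Ehcache documentation:

```xml
<ehcache>
  <!-- hypothetical cache using the BigMemory attributes described above -->
  <cache name="userCache"
         maxElementsInMemory="10000"
         eternal="false"
         overflowToOffHeap="true"
         maxMemoryOffHeap="2G"
         diskPersistent="false"/>
</ehcache>
```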
13. Terracotta Scalability Platform
Distributed Shared Objects
• www.terracotta.org
• Open source, by Terracotta Inc.
• Terracotta acquired EHCache and Quartz and provides integration plugins
14. Terracotta Scalability Platform
How Terracotta works
• Data distribution and synchronization through Terracotta server arrays
• Bytecode weaving to free application code from distribution/synchronization details
• Client/server approach: clients are applications instrumented via AOP; the server
maintains application state, with redundancy for fault tolerance
• Application state resides in the Terracotta Server
• Applications use object data as local in-memory objects; TC replicates changes
15. Terracotta Scalability Platform
Terracotta standard application layout
[diagram: three application servers (AS 1–3), each running business logic on top of the TC libraries,
connected to a TC Server with a TC Server backup; the TC Server sits alongside the database]
16. Terracotta Scalability Platform
[diagram: two JVMs sharing an object graph on their heaps; a client makes a modification to an object,
the TC libraries capture the delta, the TC Server replicates the modification, and the delta is
transmitted to the other nodes]
17. Infinispan - Advanced Datagrid Platform (1/8)
• www.infinispan.org
• Open source, based on JBoss Cache
• Current version is 4.1.0 FINAL
• LGPL License
• Developed and supported by RedHat and JBoss community
• Entirely written in Java, works on Java 6 machines
18. Infinispan - Advanced Datagrid Platform (2/8)
What does Infinispan provide?
• Cache memory for distributed environments
• Supports three modes: Local, Replicated and Distributed
• Network communication based on JGroups
a) TCP / UDP network protocol
b) Multicast / Unicast
c) Auto discovery of cluster members
d) State recovery upon cluster partitioning
• Transaction support (JTA compliant)
• Eviction algorithms to control memory usage
• Persisting state to configurable cache stores
19. Infinispan - Advanced Datagrid Platform (3/8)
Abstraction: distributed key/value Map
Same interface, same semantics
// plain java.util.Map
Map<String, Object> myMap = new HashMap<String, Object>();
myMap.put("id01", myObject);
MyObject x = (MyObject) myMap.get("id01");
assert myMap.size() == 1;

// Infinispan cache, same semantics
CacheManager cacheManager = new DefaultCacheManager();
Cache cache = cacheManager.getDefaultCache();
cache.put("id01", myObject);
MyObject x = (MyObject) cache.get("id01");
assert cache.size() == 1;

Easy to extend features and capabilities to existing code
20. Infinispan - Advanced Datagrid Platform (4/8)
Data is saved in backup copies for fault tolerance
[diagram: three Infinispan nodes; node 1 owns DS1 and backs up ds3, node 2 owns DS2 and backs up ds1,
node 3 owns DS3 and backs up ds2]
21. Infinispan - Advanced Datagrid Platform (5/8)
Common issues in a distributed environment
• Where to put objects in a cluster? Is there a "right node"?
• Uniform distribution across all nodes
• Where to look for a key? Multicast? Metadata? Routing?
• What if a node crashes?
• What if a new node joins the cluster?
Consistent Hashing
• A deterministic algorithm that maps keys to nodes
• No lookup, no multicast, no waste of network traffic
• Based on distance: each key maps to the nearest bucket
• Cluster changes involve only a small subset of data transfer
• When a node joins, it takes over some of its neighbors' keys
• When a node leaves, its data is spread among its neighbors
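The deterministic lookup described above can be sketched with a sorted map. This is an illustrative toy, not Infinispan's actual implementation (a real ring would use a stronger hash and virtual nodes for uniformity):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy consistent-hash ring (illustrative; NOT Infinispan's implementation).
// Nodes and keys are hashed onto the same integer ring; a key belongs to the
// clockwise-next node, so adding or removing a node only remaps nearby keys.
public class HashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public void addNode(String node)    { ring.put(hash(node), node); }
    public void removeNode(String node) { ring.remove(hash(node)); }

    public String nodeFor(String key) {
        if (ring.isEmpty()) throw new IllegalStateException("no nodes");
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        int slot = tail.isEmpty() ? ring.firstKey() : tail.firstKey(); // wrap around
        return ring.get(slot);
    }

    private static int hash(Object o) {
        return o.hashCode() & 0x7fffffff; // a real ring would use a better hash
    }
}
```

Because every member can compute nodeFor(key) locally, there is no lookup table to consult and no multicast, which is exactly the property the bullet list claims.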
22. Infinispan - Advanced Datagrid Platform (6/8)
Chord: a consistent hashing algorithm
[diagram: nodes a–e placed on a hash ring, shown before and after nodes join and leave]
The hash function maps data to an integer 0…N
Keys and nodes can be seen as points on the edge of a circle
A key belongs to the clockwise-next node
When a new node enters, part of the keys are reassigned
When a node crashes, its data goes to the next-in-line node
23. Infinispan - Advanced Datagrid Platform (7/8)
Peer-to-Peer vs Client/Server
[diagram, left: three clustered app servers, each with an embedded Infinispan instance (peer-to-peer);
right: C++ and Java client apps reaching two standalone, clustered Infinispan servers through a load balancer (client/server)]
24. Infinispan - Advanced Datagrid Platform (8/8)
Protocol comparison
• REST — Type: Text; Clients: tons; Clustered: yes; Smart routing: no; Load balancing / failover: any HTTP load balancer
• Memcached — Type: Text; Clients: tons (the server mode supports different protocols and can be used from different, non-JVM, languages); Clustered: yes; Smart routing: no; Load balancing / failover: only with a predefined list of servers
• Hot Rod — Type: Binary; Clients: right now, only Java; Clustered: yes; Smart routing: yes; Load balancing / failover: yes, dynamic via the Hot Rod client
• Web Socket — Type: Text; Clients: Javascript only; Clustered: yes; Smart routing: no; Load balancing / failover: any HTTP load balancer
Taken from http://community.jboss.org/wiki/InfinispanServerModules
25. Hazelcast - In-memory datagrid computing (1/4)
• www.hazelcast.com
• Open source, by Hazel Bilisim Ltd
• Current version is 1.9
• Apache License 2.0
• Hosted on google code @ http://code.google.com/p/hazelcast/
• Entirely written in Java, works on Java 6 machines
26. Hazelcast - In-memory datagrid computing (2/4)
What does Hazelcast provide?
• Specifically targeted for distributed environment, works only in distributed mode
• Distributed API for Lists, Queues, (Multi)Maps, Sets
• Not only data: Locks, Tasks execution, Events and Messages
• Ad hoc network communication with auto discovery and monitoring tools
• Easy configuration, easy to use. Hides major aspects of distribution
• Configurable number of backups for replication and fault-tolerance
• Concurrency, transaction support, state persistence
• Peer-to-peer communication only, with super clients (nodes without data)
27. Hazelcast - In-memory datagrid computing (3/4)
More than data = distributed computing
Distributed Object Queries
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IMap;
import com.hazelcast.query.SqlPredicate;
import java.util.Collection;

IMap map = Hazelcast.getMap("employees");
map.addIndex("active", false);
map.addIndex("name", false);
map.addIndex("age", true);
Collection<Employee> employees = map.values(new SqlPredicate("active AND age <= 30"));

Objects that satisfy the criteria are efficiently fetched from the cluster and returned
in a collection
28. Hazelcast - In-memory datagrid computing (4/4)
More than data = distributed computing
Distributed Task Execution
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.FutureTask;
import com.hazelcast.core.DistributedTask;
import com.hazelcast.core.Hazelcast;
FutureTask<String> task = new DistributedTask<String>(new Callable<String>() {
@Override
public String call() throws Exception {
String result = null;
// Do something useful here
return result;
}
});
ExecutorService executorService = Hazelcast.getExecutorService();
executorService.execute(task);
String result = task.get();
Execution is sent over the cluster
29. Memcached Overview
• http://memcached.org/
• Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects)
• The Memcached server is written in C, and it's a daemon service installable on different operating systems
• Its API is available for the most popular languages
• It's designed to take advantage of free memory
Java Client xmemcached : http://code.google.com/p/xmemcached/
30. Memcached & Xmemcached Java client
Telnet Interface

> telnet localhost 11211
> set key 0 900 13
> data_to_store
STORED
> get key
VALUE key 0 13
data_to_store
END

Java Client Interface (xmemcached)

MemcachedClientBuilder builder = new XMemcachedClientBuilder(
        AddrUtil.getAddresses("localhost:11211"));
MemcachedClient memcachedClient = builder.build();
memcachedClient.add("key", 0, "Hello");
memcachedClient.get("key");
memcachedClient.shutdown();

• Supports connection pooling: you can create more connections to one memcached server with java.nio.*
• Dynamically add/remove servers
• Data compression (because memcached is inefficient when you store large data)
• Fast integration with Hibernate via hibernate-memcached
31. Memcached – Xmemcached Example
Weighted servers

MemcachedClientBuilder builder = new XMemcachedClientBuilder(
        AddrUtil.getAddresses("localhost:12000 localhost:12001"),
        new int[]{1, 3});
MemcachedClient memcachedClient = builder.build();

You can change the weight dynamically through JMX:

public interface XMemcachedClientMBean {
    public void setServerWeight(String server, int weight);
}

XMemcached can adjust the weight of a node to balance the load across memcached servers: the higher the weight,
the more data the server stores and the more load it receives.

Counters for increment / decrement

...
MemcachedClient memcachedClient = builder.build();
Counter counter = memcachedClient.getCounter("counter", 0);
counter.incrementAndGet();
counter.decrementAndGet();
counter.addAndGet(-10);

You can use MemcachedClient's incr/decr methods to increase or decrease a counter, but xmemcached also has a
Counter class that encapsulates the incr/decr methods, so you can use a counter just like an AtomicLong.
32. Memcached & MySQL (multiple memcached servers and a standalone MySQL server)
MySQL Enterprise includes memcached for cluster configurations.
There are various ways to design scalable architectures using memcached and MySQL.
33. Memcached & MySQL (multiple memcached servers with a master and multiple MySQL slaves)
This architecture allows scaling a read-intensive application.
34. Memcached & MySQL (sharding: multiple memcached servers with a master and multiple MySQL slaves)
With sharding (application partitioning) we partition data across multiple physical servers to gain read and
write scalability.
35. Performance Reports
Items: 100,000 objects (String name, String surname, int age, String description)
Configuration: 2 nodes on the same physical machine (localhost)
Hardware: Intel Core2 Duo CPU P8400 @ 2.26 GHz – 3.48 GB RAM
[bar chart: put() and get() times compared across EHCache 2.2, Infinispan, HazelCast and Xmemcached]
(*) Before making a choice it is really important to execute realistic tests in your environment, with different cluster
sizes, network bandwidth, different concurrent access patterns and application types:
e-commerce-like web applications, data-intensive processing, near-real-time apps…
36. CAP Theorem and Cache coherency
It states that though it is desirable to have Consistency, high Availability and Partition tolerance in every
system, unfortunately no system can achieve all three at the same time.
This is also true for distributed cache solutions.
Consistency: all nodes see the same data at the same time.
Availability: node failures do not prevent survivors from continuing to operate.
Partition tolerance: risk of data partition; the system continues to operate despite arbitrary message loss.

Read and Write Through / Sync Mode:
Cache Mode =>        Replicated   Distributed
Consistency          Yes          Yes
Availability         Yes          No
Partition Tolerance  No           Yes

Write Behind / Async Mode:
Cache Mode =>        Replicated   Distributed
Consistency          No           No
Availability         Yes          No
Partition Tolerance  No           Yes
37. Conclusion
• Distributed caches are an interesting technology, but come at a cost
• There is no "perfect solution": every choice must be evaluated
• They work best on mostly-read data
• In-memory state is more difficult to monitor than traditional solutions
• Replication best fits small clusters
• Big environments need actual data distribution
38. Reference 1/2
Official documentation
• EHCache & Terracotta : http://ehcache.org/documentation/index.html
• Terracotta : http://www.terracotta.org/platform/
• Infinispan : http://jboss.org/infinispan/docs.html
• HazelCast : http://www.hazelcast.com/documentation.jsp
• Memcached : http://memcached.org/
• Xmemcached : http://code.google.com/p/xmemcached/
Articles & Blogs
• Intro to Caching,Caching algorithms and caching frameworks part 4 :
http://javalandscape.blogspot.com/2009/03/intro-to-cachingcaching-algorithms-and.html
• Comparison of the Grid/Cloud Computing Frameworks (Hadoop, GridGain, Hazelcast, DAC) - Part II
http://java.dzone.com/articles/comparison-gridcloud-computing-0
• Brewers CAP Theorem on distributed systems
http://www.hazelcast.com/documentation.jsp
• Ehcache - A Java Distributed Cache
http://highscalability.com/ehcache-java-distributed-cache
• A Matter of Scale: The CAP Theorem and Memory Models
http://coverclock.blogspot.com/2010/05/matter-of-scale-cap-theorem-and-memory.html
• Consistent Hashing
http://www.lexemetech.com/2007/11/consistent-hashing.html
• A Couple Minutes With Non-Stop Ehcache
http://dsoguy.blogspot.com/2010/05/couple-minutes-with-non-stop-ehcache_07.html
• Designing and Implementing Scalable Applications with Memcached and MySQL
http://www.mysql.com/why-mysql/white-papers/mysql_wp_memcached.php
39. Reference 2/2
Presentations
• Shopzilla On Concurrency : http://www.slideshare.net/WillGage/shopzilla-on-concurrency-3872625
• Scaling your cache : http://www.slideshare.net/alexmiller/scaling-your-cache
• Caching in Distributed Environments : http://www.slideshare.net/abhigad/7564192
• Infinispan by Jteam : http://www.jteam.nl/specials/techtalks/011110/attachment/Infinispan.pdf
Wikipedia
• Brewer’s theorem : http://en.wikipedia.org/wiki/CAP_theorem
• Performance tuning : http://en.wikipedia.org/wiki/Performance_tuning
40. Q&A and Thanks
Davide Carnevali Lorenzo Acerbo
Email : davide.carnevali at gmail.com Email : lorenzo.acerbo at gmail.com
Skype : davide.carnevali Skype : lorenzo.acerbo