Cache Aside
Siva Rama Krishna
Cloud Design
Patterns Series
@sivachunduru
linkedin.com/in/chunduru
slideshare.net/sivachunduru
$> who am i
Solution Specialist
Oracle Cloud Platform
Motive of a pattern
• Optimal use of resources
• Improvement in application performance
• Better Resiliency quotient
Context
• To deal with repeated access of information in the data store.
Challenge
• To ensure the cached data is always consistent with the data in the data store.
Background of Cache
• The application's strategy should:
– ensure that the data in the cache is as up-to-date as possible
– detect and handle situations that arise when the data in the cache has become stale
Cache – common access patterns
• Cache-as-DS** (the cache is the primary system of record)
– Read-through
– Write-through
– Write-behind
Many commercial caching systems provide these patterns out of the box.
** DS = Data Store
Cache-as-DS
• Read-through
– The cache is configured with a loader component that knows how to load data from the DS
– If the requested data doesn't exist in the cache, the loader loads it from the DS
• Write-through
– The cache is configured with a writer component that knows how to write data to the DS
– When the cache is asked to store a value for a key, it invokes the writer to store the value in the DS as well as updating the cache
• Write-behind
– This changes the timing of the write to the DS
– Basically, it queues the data for writing at a later time (the hooks behind these patterns are sketched below)
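A minimal sketch of the loader and writer hooks behind these patterns. The interface and method names (CacheLoader, CacheWriter, load, write) are illustrative assumptions, not the API of any particular caching product.

// Illustrative read-through / write-through hooks; names are assumptions,
// not a specific product's API.
public interface CacheLoader<K, V> {
    // Read-through: invoked by the cache on a miss to fetch the value from the DS.
    V load(K key);
}

public interface CacheWriter<K, V> {
    // Write-through: invoked by the cache to persist the value to the DS
    // while the cached entry is updated.
    // Write-behind typically reuses the same hook, but the cache queues the
    // call and invokes it asynchronously at a later time.
    void write(K key, V value);
}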
What if the cache system doesn't provide these patterns?
It's the responsibility of the application that uses the cache to manage the data
…meaning the application code uses the cache directly
So, the need for Cache Aside
• An application must emulate this functionality itself:
– Reading values
– Writing values
• This strategy loads data into the cache, and evicts it, on demand.
• One can choose to pre-load data or use LAYGO (Load As You Go), decided case by case.
Cache Aside – Reading illustration
When the application reads an item:
• Check if the item exists in the cache
• If it doesn't exist, read the item from the data store
• Store a copy of the item in the cache
Cache Aside – Reading – Pseudo code
value = cache.get(key)
if (value == null) {
    value = datasource.get(key)
    cache.put(key, value)
}
Cache Aside – Writing illustration
When the application updates an item:
• Make the change in the data store
• Invalidate the item in the cache
• For the next READ of the item, the application uses the reading pattern above
Cache Aside – Writing – Pseudo code
value = newValue
datasource.put(key, value)
cache.invalidate(key)

// alternative: refresh the cache entry instead of invalidating it
datasource.put(key, value)
cache.put(key, value)
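A slightly fuller sketch of both paths in Java, assuming generic Cache and DataSource abstractions with get/put/invalidate methods; these are illustrative interfaces, not a specific library's API.

// Cache-Aside sketch; Cache and DataSource are assumed abstractions.
interface Cache<K, V> { V get(K key); void put(K key, V value); void invalidate(K key); }
interface DataSource<K, V> { V get(K key); void put(K key, V value); }

public class CacheAsideRepository<K, V> {
    private final Cache<K, V> cache;        // e.g. a Redis- or in-memory-backed cache
    private final DataSource<K, V> store;   // the system of record

    public CacheAsideRepository(Cache<K, V> cache, DataSource<K, V> store) {
        this.cache = cache;
        this.store = store;
    }

    // Read path: try the cache first, fall back to the data store, then cache a copy.
    public V read(K key) {
        V value = cache.get(key);
        if (value == null) {
            value = store.get(key);
            if (value != null) {
                cache.put(key, value);
            }
        }
        return value;
    }

    // Write path: update the system of record first, then invalidate the stale entry;
    // the next read repopulates the cache.
    public void write(K key, V newValue) {
        store.put(key, newValue);
        cache.invalidate(key);
    }
}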
Issues & Considerations
Lifetime of cached data
• Don't make the expiration period too short: it causes applications to regularly retrieve data from the DS and add it to the cache again.
• Don't make the expiration period too long: it makes the cached data likely to become stale.
Caching is most effective for relatively static data, or data that is read frequently.
• Expiration policy: used to invalidate and remove inactive data from the cache (see the example below)
• To be effective, the expiration policy must match the pattern of data access
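As an illustration, with Spring Data Redis a per-entry expiration can be set when a value is written; StringRedisTemplate is the real API, while the key and the 10-minute timeout are arbitrary example values.

// Writing a cache entry with a TTL using Spring Data Redis.
import java.time.Duration;
import org.springframework.data.redis.core.StringRedisTemplate;

public class TtlExample {
    private final StringRedisTemplate redis;

    public TtlExample(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void cacheWithTtl(String key, String value) {
        // Redis evicts the entry automatically once the TTL elapses.
        redis.opsForValue().set(key, value, Duration.ofMinutes(10));
    }
}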
Evicting data
• The cache is usually much smaller than the data store, so data eviction is inevitable
• Various eviction policies exist, e.g. a least-recently-used (LRU) policy or a global expiration policy with per-item customizations
It isn't always appropriate to apply a global eviction policy to every item.
Ex: if a cached item is very expensive to retrieve from the DS, it is better to keep that item in the cache at the expense of more frequently accessed but less costly items.
Priming the cache
• Many solutions pre-populate the cache with the data that an application is
likely to need as part of the startup processing.
The Cache-Aside pattern can still be useful if some of this data expires or is evicted; a startup priming sketch follows.
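A hedged sketch of priming, assuming Spring Boot: an ApplicationRunner pre-populates a cache at startup. ApplicationRunner and CacheManager are real Spring types; the cache name "products" and the ProductStore interface are illustrative assumptions.

// Startup cache priming sketch; ProductStore and the "products" cache name are hypothetical.
import java.util.Map;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class CachePrimer implements ApplicationRunner {

    // Minimal placeholder for the system of record.
    public interface ProductStore {
        Map<String, Object> loadLikelyNeeded();
    }

    private final CacheManager cacheManager;
    private final ProductStore store;

    public CachePrimer(CacheManager cacheManager, ProductStore store) {
        this.cacheManager = cacheManager;
        this.store = store;
    }

    @Override
    public void run(ApplicationArguments args) {
        // Pre-populate the "products" cache with data the app is likely to need at startup.
        var cache = cacheManager.getCache("products");
        store.loadLikelyNeeded().forEach(cache::put);
    }
}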
Consistency
• This pattern doesn't guarantee consistency between the DS & the cache
(Diagram: an external process outside the organization boundary updates a value directly in the data store; the cached copy is not updated and becomes stale.)
Local (in-memory) caching
• A cache could be local to an application instance and stored in-memory.
• Cache-Aside can be useful in this environment if an application repeatedly
accesses the same data.
The local cache is private, so each app instance has its own copy of the same data.
This data can quickly become inconsistent between caches; expire and refresh it frequently.
Consider investigating the use of a shared or distributed caching mechanism
Applicability of Cache-Aside pattern
Suitable when
• A cache doesn't provide native read-through and write-through operations.
• Resource demand is unpredictable. This pattern enables applications to load data on demand and makes no assumptions about which data an application will require in advance.
Not suitable when
• The cached data set is purely static. If so, consider serving it via a CDN instead.
• Caching session state information in a web application hosted in a web
farm. In this environment, you should avoid introducing dependencies
based on client-server affinity.
Example implementations
Coherence
• Read-thru
• Write-thru
• Write-behind
Example implementations
Spring
Read-thru in Spring
• When getRecordForSearch is called, the result is fetched from the cache if it is available (the method body is not executed at all).
• Otherwise, it is retrieved from the data store (a hedged sketch follows below).
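A hedged sketch of what such a method could look like with Spring's caching abstraction. @Cacheable and its attributes are the real Spring API; the cache name "records", the condition, and the record/repository types are illustrative assumptions.

// Read-through-style caching with Spring's @Cacheable.
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class RecordService {

    // Minimal hypothetical types so the sketch is self-contained.
    public record SearchRecord(String id, String keyword) {}
    public interface SearchRepository { SearchRecord findByKeyword(String keyword); }

    private final SearchRepository repository;

    public RecordService(SearchRepository repository) {
        this.repository = repository;
    }

    // Cache hit: the method body is skipped and the value comes from the "records" cache.
    // Cache miss: the result is fetched from the data store and added to the cache.
    // The condition illustrates conditional caching: only "qualifying" keywords are cached.
    @Cacheable(value = "records", key = "#keyword", condition = "#keyword.length() > 3")
    public SearchRecord getRecordForSearch(String keyword) {
        return repository.findByKeyword(keyword);
    }
}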
Write-thru in Spring
• When a method that updates the data in the data store is called, it also invalidates the cache entry using the key.
• In this approach, the cache entry gets loaded only when it is re-requested after the update.
• Cache entries can also be updated when the data is updated in the data store, giving faster retrieval and consistency in one go (both options are sketched below).
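A hedged sketch of both write options with Spring's annotations. @CacheEvict and @CachePut are the real Spring annotations; the service and cache key reuse the illustrative types from the read-through sketch above.

// Cache-aside writes with Spring annotations; the persistence call is a placeholder.
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.stereotype.Service;

@Service
public class RecordUpdateService {

    // Option 1: update the data store, then invalidate the cached entry;
    // the next read repopulates the cache (the classic Cache-Aside write).
    @CacheEvict(value = "records", key = "#record.keyword()")
    public void updateAndInvalidate(RecordService.SearchRecord record) {
        writeToDataStore(record);
    }

    // Option 2: update the data store and refresh the cache in one go;
    // @CachePut always runs the method and stores the returned value in the cache.
    @CachePut(value = "records", key = "#record.keyword()")
    public RecordService.SearchRecord updateAndRefresh(RecordService.SearchRecord record) {
        return writeToDataStore(record);
    }

    private RecordService.SearchRecord writeToDataStore(RecordService.SearchRecord record) {
        // Placeholder for the real data-store write (hypothetical).
        return record;
    }
}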
Example implementations
Redis Cache
Redis - Overview
• Redis is an
– open-source
– in-memory key-value data store
• Redis is used as
– a database
– cache
– message broker
• In terms of implementation, Key-Value stores represent one of the largest
and oldest members in the NoSQL space
• Redis supports data structures such as strings, hashes, lists, sets, and
sorted sets with range queries.
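For illustration, a few of these data structures as used through Spring Data Redis; StringRedisTemplate and its ops* methods are the real API, while key names and values are arbitrary example data.

// Illustrative use of Redis data structures via Spring Data Redis.
import org.springframework.data.redis.core.StringRedisTemplate;

public class RedisStructuresDemo {

    private final StringRedisTemplate redis;

    public RedisStructuresDemo(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void demo() {
        redis.opsForValue().set("greeting", "hello");                      // string
        redis.opsForHash().put("user:1", "name", "Siva");                  // hash field
        redis.opsForList().rightPush("recent-searches", "cache aside");    // list
        redis.opsForSet().add("tags", "cloud", "patterns");                // set
        redis.opsForZSet().add("leaderboard", "player-1", 42.0);           // sorted set (score)
    }
}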
SpringBoot – Redis integration
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
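Beyond the dependencies, a minimal configuration sketch, assuming Spring Boot 2.x property names (spring.redis.*; newer Spring Boot releases use the spring.data.redis.* prefix) and caching enabled with @EnableCaching. The 10-minute default TTL is an example value.

// Caching configuration sketch; Spring Boot auto-configures a Redis-backed
// CacheManager and, if present, picks up this RedisCacheConfiguration bean.
//
// application.properties (Spring Boot 2.x style):
//   spring.redis.host=localhost
//   spring.redis.port=6379
import java.time.Duration;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public RedisCacheConfiguration cacheConfiguration() {
        // Default behaviour for all caches: entries expire after 10 minutes (example value).
        return RedisCacheConfiguration.defaultCacheConfig()
                                      .entryTtl(Duration.ofMinutes(10));
    }
}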
DEMO
Pull Redis docker image
Create Redis configuration and start container
Explore and Run SpringBoot application
Retrieve the record that qualifies for caching
• Out of 5 requests, the 1st was served from the DS and the remaining 4 were served from the cache
Check in cache about a new entry
Retrieve the record that doesn’t qualify for caching
Summary - Motive of the pattern
• Optimal usage of resources
• Improved application performance
• Resiliency in application behavior
• Optimal caching • Optimal READ operations • Pre-load data (or) LAYGO (Load As You Go)

Editor's Notes

  • #4 Resiliency: do you bend or break?
  • #15 If an application updates information, it can follow the write-through strategy by making the modification to the data store, and by invalidating the corresponding item in the cache.
  • #17 What points are to be considered when pursuing this pattern ?
  • #21 An item in the data store can be changed at any time by an external process, and this change might not be reflected in the cache until the next time the item is loaded. In a system that replicates data across data stores, this problem can become serious if synchronization occurs frequently.
  • #30 The Spring Framework provides support for transparently adding caching to an application. At its core, the abstraction applies caching to methods, thus reducing the number of executions based on the information available in the cache. The caching logic is applied transparently, without any interference to the invoker. Spring provides an abstraction layer with a set of annotations for caching support and can work together with various cache implementations like Redis, EhCache, Hazelcast, Infinispan and many more.
  • #31 @Cacheable: populates the cache after method execution; the next invocation with the same arguments skips the method and the result is loaded from the cache. The annotation also provides a useful feature called conditional caching: in some cases not all data should be cached, e.g. you may want to store in memory only results for certain search keywords.
  • #33 @CacheEvict: removes an entry from the cache; it can be conditional or apply globally to all entries in a specific cache. @CachePut: updates an entry in the cache and supports the same options as @Cacheable. The code updates the record and returns it so the cache provider can replace the entry with the new value.
  • #35 Redis is a popular open-source in-memory data store used as a database, message broker and cache; for now only the last use case is important for us.
  • #45 LAYGO => Load As You GO