The document discusses Ristretto, a high-performance concurrent cache for Go. It aims for fast access times, high concurrency with resistance to contention, and bounded memory usage. It uses TinyLFU for admission control and sampled LFU for eviction. The store is sharded across mutex-wrapped maps for concurrency, and hashing uses memhash for speed. Gets and Sets go through lossy ring buffers and channels to balance performance and scalability across goroutines. Metrics are laid out to avoid false-sharing overhead. Usage involves creating a cache, then setting, getting, and deleting values.
7. Requirement
- Concurrent
- High cache-hit ratio
- Memory-bounded (limit to configurable max memory usage)
- Scale well as the number of cores and goroutines increases
2019/10/14 7
11. Key Cost
- An infinitely large cache is practically impossible
- Attach a cost to every key-value pair
- A heavy item could displace many lightweight items
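The cost model above can be illustrated with a toy cost-bounded store. This is not Ristretto's implementation; `costStore` and its FIFO eviction are invented for illustration. It shows how one heavy item can displace many lightweight ones to stay under the budget.

```go
package main

import "fmt"

// costStore is a minimal sketch of a cost-bounded store: every key carries a
// cost, and inserting a heavy item may evict many lightweight items so that
// the total cost stays under maxCost. (FIFO eviction stands in for a real
// eviction policy such as Ristretto's sampled LFU.)
type costStore struct {
	maxCost, used int64
	costs         map[string]int64
	order         []string // insertion order
}

func newCostStore(maxCost int64) *costStore {
	return &costStore{maxCost: maxCost, costs: make(map[string]int64)}
}

func (s *costStore) set(key string, cost int64) {
	// Evict the oldest entries until the new item fits.
	for s.used+cost > s.maxCost && len(s.order) > 0 {
		old := s.order[0]
		s.order = s.order[1:]
		s.used -= s.costs[old]
		delete(s.costs, old)
	}
	s.costs[key] = cost
	s.order = append(s.order, key)
	s.used += cost
}

func main() {
	s := newCostStore(10)
	for _, k := range []string{"a", "b", "c", "d", "e"} {
		s.set(k, 2) // five lightweight items, total cost 10
	}
	s.set("heavy", 8) // one heavy item displaces four lightweight ones
	fmt.Println(len(s.costs), s.used) // 2 10
}
```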
28. Set
- Buffered channel: when full, drop the Set instead of blocking

select {
case c.setBuf <- &item{key: hash, val: val, cost: cost}:
	return true
default:
	// drop the set and avoid blocking
	c.stats.Add(dropSets, hash, 1)
	return false
}
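The select pattern from the slide can be exercised as a standalone sketch. Here `item`, `lossySet`, and the buffer size of 2 are illustrative, not Ristretto's internals; the point is that a full buffer causes the Set to be dropped rather than blocking the caller.

```go
package main

import "fmt"

// item mirrors the shape used on the slide; the field names are illustrative.
type item struct {
	key  uint64
	val  interface{}
	cost int64
}

// lossySet pushes an item onto a buffered channel. If the buffer is full,
// the Set is dropped (returns false) instead of blocking the caller.
func lossySet(setBuf chan *item, key uint64, val interface{}, cost int64) bool {
	select {
	case setBuf <- &item{key: key, val: val, cost: cost}:
		return true
	default:
		// Buffer full: drop the Set and avoid blocking.
		return false
	}
}

func main() {
	buf := make(chan *item, 2)
	fmt.Println(lossySet(buf, 1, "a", 1)) // true
	fmt.Println(lossySet(buf, 2, "b", 1)) // true
	fmt.Println(lossySet(buf, 3, "c", 1)) // false: buffer full, dropped
}
```

Dropping Sets trades strict completeness for throughput: under contention the cache simply misses some updates instead of serializing all writers.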
29. Metrics
- Prevent false sharing
- 10% overhead
Add:
valp := p.all[t]
// Avoid false sharing by padding at least 64 bytes of space between two
// atomic counters which would be incremented.
idx := (hash % 25) * 10
atomic.AddUint64(valp[idx], delta)
Read:
valp := p.all[t]
var total uint64
for i := range valp {
	total += atomic.LoadUint64(valp[i])
}
return total
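The padding trick in Add/Read can be sketched as a self-contained striped counter. `stripedCounter` and its sizes are illustrative and differ from Ristretto's actual layout, but the idea is the same: consecutive logical counters are spaced 10 uint64s (80 bytes) apart, so two hot counters never share a 64-byte cache line.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// stripedCounter keeps 25 logical counters in one array, each at an index
// that is a multiple of 10. A stride of 10 uint64s (80 bytes) exceeds a
// typical 64-byte cache line, so increments to different stripes avoid
// false sharing.
type stripedCounter struct {
	all [256]uint64 // 25 stripes * stride 10, with room to spare
}

func (c *stripedCounter) add(hash, delta uint64) {
	idx := (hash % 25) * 10
	atomic.AddUint64(&c.all[idx], delta)
}

func (c *stripedCounter) read() uint64 {
	var total uint64
	for i := range c.all {
		total += atomic.LoadUint64(&c.all[i])
	}
	return total
}

func main() {
	var c stripedCounter
	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func(g uint64) {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				c.add(g, 1) // each goroutine hits its own stripe
			}
		}(uint64(g))
	}
	wg.Wait()
	fmt.Println(c.read()) // 8000
}
```

Reads sum every slot, so they pay a small linear cost; the win is that the hot path (Add) touches exactly one cache line per stripe.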
30. How to
package main

import (
	"fmt"
	"time"

	"github.com/dgraph-io/ristretto"
)

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e7,     // number of keys to track frequency of (10M).
		MaxCost:     1 << 30, // maximum cost of cache (1GB).
		BufferItems: 64,      // number of keys per Get buffer.
	})
	if err != nil {
		panic(err)
	}
	cache.Set("key", "value", 1) // set a value with a cost of 1
	// wait for the value to pass through buffers
	time.Sleep(10 * time.Millisecond)
	value, found := cache.Get("key")
	if !found {
		panic("missing value")
	}
	fmt.Println(value)
	cache.Del("key")
}