Blake Crosby Julian Dunn Media Operations and Technology CBC/Radio-Canada Cache Optimization & Origin Infrastructure Reduction Using Akamai Site Delivery
The document discusses how CBC/Radio-Canada optimized caching and reduced origin infrastructure using Akamai site delivery. Key strategies included setting default blanket caching rules, making heavy use of conditional GET requests, categorizing content into different TTL buckets, and enabling last-mile acceleration. These techniques reduced origin costs and footprint while maintaining a high level of content freshness and performance.
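Conditional GETs let edge servers revalidate cached copies with a cheap 304 instead of refetching full bodies from the origin. A minimal origin-side sketch of that revalidation logic (the `respond` helper and its arguments are hypothetical, not from the presentation):

```python
from email.utils import parsedate_to_datetime

def respond(request_headers, resource_mtime, body):
    """Origin-side sketch: honor If-Modified-Since so edge caches can
    revalidate cheaply. resource_mtime is a timezone-aware datetime."""
    ims = request_headers.get("If-Modified-Since")
    if ims is not None:
        try:
            if parsedate_to_datetime(ims) >= resource_mtime:
                return 304, b""  # Not Modified: edge keeps its cached copy
        except (TypeError, ValueError):
            pass  # unparsable or naive date: fall through to a full response
    return 200, body  # full response, body travels back to the edge
```

Against a large origin fleet, every 304 served this way is a full response body the origin never has to generate, which is the footprint reduction the deck describes.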
This document summarizes six techniques for optimizing cache performance by reducing the average memory access time. The techniques fall into three categories: reducing miss rate through larger block/cache sizes and higher associativity, reducing miss penalty through multilevel caches and prioritizing reads over writes, and reducing hit time by avoiding address translation during cache indexing. Prioritizing reads lets read misses proceed while writes wait in a buffer that is checked for address conflicts. Avoiding translation during indexing means using the page offset to index the cache set directly, or indexing the cache virtually with restrictions such as page coloring to prevent aliasing.
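All six techniques target the same quantity, average memory access time (AMAT = hit time + miss rate × miss penalty). A small sketch with illustrative, made-up numbers shows the trade-off the first category accepts:

```python
def amat(hit_time, miss_rate, miss_penalty):
    # Average memory access time, in cycles: the quantity
    # every one of the six techniques tries to lower.
    return hit_time + miss_rate * miss_penalty

# Illustrative numbers only: higher associativity halves the miss
# rate but slightly slows the hit path.
base  = amat(1.0, 0.05,  100)  # 1.0 + 0.05*100  = 6.0 cycles
assoc = amat(1.2, 0.025, 100)  # 1.2 + 0.025*100 = 3.7 cycles
```

Despite the slower hit time, the associative configuration wins because the miss-penalty term dominates, which is why the categories are organized around which term of this formula they attack.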
This document discusses memory hierarchy design and optimization. It describes the levels of memory from fastest to slowest as registers, cache, main memory, disk storage, and backup storage. The goal of the memory hierarchy is to provide fast memory access at a low cost per byte. It explains principles like locality of reference, cache block structure, hit/miss rates, and the causes of cache misses. Basic optimizations discussed include increasing block size, cache size, and associativity, and using multiple cache levels to reduce miss rates and penalties. Address translation through virtual memory and the translation lookaside buffer are also summarized.
18. Generating Detection and Prefetching Code: a hot data stream v = v1 v2 ... v_{v.length} is split into a head v.head = v1 v2 ... v_{headLen} and a tail v.tail = v_{headLen+1} v_{headLen+2} ... v_{v.length}.
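The head/tail split on the slide can be sketched as follows (function and parameter names follow the slide's notation; this is an illustration, not the deck's actual code):

```python
def split_stream(v, head_len):
    """Split a hot data stream v into a head (matched at run time to
    detect that the stream has started) and a tail (the remaining
    accesses to prefetch once the head has been observed)."""
    assert 0 < head_len < len(v), "head must be a proper, non-empty prefix"
    return v[:head_len], v[head_len:]
```

The choice of head_len trades detection cost against prefetch benefit: a longer head makes false matches rarer but leaves fewer tail elements to prefetch.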
30. Effectiveness evaluated in C and Java. [20] M. H. Lipasti, C. B. Wilkerson, and J. P. Shen. Value Locality and Load Value Prediction. In Proceedings of the Second International Conference on Architectural Support for Programming Languages and Operating Systems, pages 138–147, 1996.
32. Kind: object Field, Array element, Scalar variable.
34. Five load-value predictors, each with 2048-entry and infinite-entry configurations: (i) lv, which predicts the last value for every load; (ii) l4v, which predicts one of the last four values for every load; (iii) st2d, which uses strides to predict loads; (iv) fcm, which uses a representation of the context of preceding loads to predict a load; (v) dfcm, which enhances fcm with strides.
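A minimal sketch of the two simplest predictors from the list, lv and st2d, indexed by load PC as in the slides (the class and table layout here are assumptions for illustration, not the evaluated implementations):

```python
class LastValuePredictor:
    """lv: predict that a load returns the same value it returned last time."""
    def __init__(self):
        self.table = {}  # load PC -> last observed value

    def predict(self, pc):
        return self.table.get(pc)  # None until the load has executed once

    def update(self, pc, value):
        self.table[pc] = value


class StridePredictor:
    """st2d: predict last value + last observed stride."""
    def __init__(self):
        self.table = {}  # load PC -> (last value, last stride)

    def predict(self, pc):
        if pc not in self.table:
            return None
        last, stride = self.table[pc]
        return last + stride

    def update(self, pc, value):
        last, _ = self.table.get(pc, (value, 0))
        self.table[pc] = (value, value - last)
```

A real 2048-entry predictor would hash the PC into a fixed-size table (allowing aliasing between loads); the dict here models the infinite-entry configuration. fcm and dfcm additionally key the table by a hash of the last few loaded values rather than by PC alone.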