You think Varnish can cache responses by URL only? Not even close. Learn the different caching strategies available in Varnish, their benefits and the consequences of using them: when and how to queue requests for the same endpoint, how to handle requests with conditional caching headers, and how to build two levels of cache by tagging responses.
In-depth caching in Varnish - GOG Varnish Meetup, March 2019
3. Try Varnish 6 in Docker!
$ docker run lukaszlach/varnish:6
Debug: Version: varnish-6.0.1 revision 8d54bec5330c293049ebf...
Debug: Platform: Linux,4.9.125-linuxkit,x86_64,-junix...
Debug: Child (17) Started
Info: Child (17) said Child starts
4. Varnish is a caching reverse proxy that can:
● pick an HTTP backend using one of the available
load-balancing strategies
● pass the request to a backend and do nothing more
● additionally cache the response if the backend sent
a valid caching header or VCL instructed it to
● serve a stale response from cache while still updating it
in the background with request coalescing
● serve a stale response when backend servers are down
5. VCL is a programming language
● Varnish translates VCL into C code, compiles it and
executes it when requests arrive
● VCL files are organized into subroutines
● if you do not call an action in your subroutine and it
reaches the end, Varnish falls through to the built-in VCL
● VCL has no loops, jump statements, custom
variables, classes or functions
8. vcl_recv # Called at the beginning of a request
1. pass every method except GET and HEAD
2. pass if request contains Cookie or Authorization header
3. hash
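These three steps mirror the built-in vcl_recv. A simplified VCL sketch of the equivalent logic (the real built-in additionally handles the PRI method, pipes unknown methods and rejects invalid requests):
sub vcl_recv {
# anything other than GET and HEAD goes straight to the backend
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# potentially personalized requests are never served from cache
if (req.http.Authorization || req.http.Cookie) {
return (pass);
}
return (hash);
}
VCL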
9. vcl_hash # Called to create a cache key for the request
1. add request URL to the cache key
2. add either the Host header or server IP to the cache key
3. lookup object in cache
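In Varnish 6 the built-in vcl_hash implements exactly these steps:
sub vcl_hash {
hash_data(req.url);
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
return (lookup);
}
VCL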
10. vcl_hit # Called when a cache lookup is successful
1. serve the cached object if ttl is greater than 0 seconds
(a valid cache object)
2. serve the cached object if ttl+grace is greater than 0 seconds
(a stale object in grace mode) and trigger a background fetch
3. miss
11. vcl_backend_response
# Called after the response headers have been successfully retrieved
1. deliver if the request was marked uncacheable; a hit-for-miss
object already exists
2. deliver and mark as hit-for-miss for 2 minutes if the response
● has a beresp.ttl less than or equal to 0 seconds
● contains a Set-Cookie header
● contains a Surrogate-Control header with the no-store flag set
● does not contain a Surrogate-Control header but Cache-Control
has any of no-cache, no-store or private set
● contains a Vary header equal to *
3. otherwise deliver and store the object in cache
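These rules correspond to the built-in vcl_backend_response, which in Varnish 6 looks roughly like this:
sub vcl_backend_response {
if (bereq.uncacheable) {
return (deliver);
}
if (beresp.ttl <= 0s ||
beresp.http.Set-Cookie ||
beresp.http.Surrogate-Control ~ "(?i)no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "(?i:no-cache|no-store|private)") ||
beresp.http.Vary == "*") {
# mark as hit-for-miss for the next 2 minutes
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
VCL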
12. ttl
Before Varnish runs vcl_backend_response, the beresp.ttl variable has
already been set to a value.
beresp.ttl is initialized with the first value it finds among:
● The s-maxage directive in the Cache-Control response header
● The max-age directive in the Cache-Control response header
● The Expires response header
● The default_ttl runtime parameter
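Because beresp.ttl is already set when vcl_backend_response runs, you can simply override it there. A sketch that forces a one-hour TTL for static assets regardless of backend headers (the URL pattern is just an example):
sub vcl_backend_response {
if (bereq.url ~ "\.(css|js|png|jpg)$") {
set beresp.ttl = 1h;
}
}
VCL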
13. Backend error ≠ HTTP error
When Varnish cannot connect to the backend or the request times
out, this is considered a failed fetch and vcl_backend_error is
called.
14. Backend error ≠ HTTP error
When the backend returns a 5xx or any other valid HTTP response,
Varnish considers this a successful fetch and vcl_backend_response
is called.
If this is a background fetch and the response is uncacheable, the
previously cached object is erased from the cache and may be replaced
with a hit-for-miss object.
15. Gracefully fallback to cache
Stop cache insertion and keep the previously cached object in
cache when a background fetch returns a 5xx error.
sub vcl_backend_response {
if (beresp.status >= 500 && bereq.is_bgfetch) {
return (abandon);
}
}
VCL
16. Not modified
If a cache object is being refreshed and backend returns a 304 response,
Varnish amends beresp before calling vcl_backend_response:
● If the gzip status changed, Content-Encoding is unset
and any ETag is weakened
● Any headers not present in the 304 response
are copied from the existing cache object
● The status gets set to 200
17. If you return (pass)
in vcl_recv, Varnish never
gets a chance to cache
the response, so it will
never send 200 or 304
by itself for any request.
The same flow applies
to a hit on a hit-for-pass
object.
18. If you return (hash)
in vcl_recv and Varnish
caches the response, it
can start sending 200
and 304 responses,
even when the cache
object is stale.
19. Cached status codes
The following status codes are cached by default:
200: OK
203: Non-Authoritative Information
300: Multiple Choices
301: Moved Permanently
302: Moved Temporarily
304: Not Modified
307: Temporary Redirect
410: Gone
404: Not Found
20. max-age=0 ≠ no-cache
max-age=0 tells caches (and clients) the response is stale from the
very beginning, so they should revalidate the response before using
a cached copy. They may still be allowed to serve it as stale content.
no-cache tells caches they must revalidate the response before using
a cached copy and may never serve it stale.
21. Request coalescing
The benefit of a graced object in cache is that Varnish queues requests
for the same cache key, triggering one background fetch; as long as it
is running, all clients are served the stale object.
So setting, for example, ttl=0s and grace=1m enables request
coalescing for one minute while still constantly refreshing the cached
object from the backend.
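One way to apply this combination to every cacheable response is to set ttl and grace in vcl_backend_response (the one-minute grace period here is only an example value):
sub vcl_backend_response {
# the object is stale immediately but stays usable in grace mode
set beresp.ttl = 0s;
set beresp.grace = 1m;
}
VCL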
22. The first request for the
resource creates a graced
object in the cache.
Further requests to this
resource are served
from cache and a
background fetch is
triggered, one backend
fetch at a time.
23. The built-in VCL
Run varnishd -x builtin to view the built-in VCL.
$ docker run lukaszlach/varnish:6
varnishd -x builtin
/*-
* Copyright (c) 2006 Verdens Gang AS
* Copyright (c) 2006-2015 Varnish Software AS
...
25. Modifying the cache key
The easiest way to create a separate cache object
for the same URL and Host is to call hash_data in
vcl_hash with a value you want to differentiate on.
26. hash_data
vcl_hash is a client-side subroutine, so you can only use the req object.
sub vcl_hash {
hash_data(req.http.Any-Request-Header);
# the if below is unnecessary, the effect is the same as above
if (req.http.X-Country) {
hash_data(req.http.X-Country);
}
# having a default value makes more sense
if (req.http.X-Device) {
hash_data(req.http.X-Device);
} else {
hash_data("desktop");
}
}
VCL
27. Know the cache key
You can pass any value from VCL to varnishlog and varnishncsa, including
the cache key in an encoded format; individual parts of the key are not available.
import std;
import blob;
sub vcl_deliver {
if (obj.hits > 0) {
std.log("cache_key:" + blob.encode(blob=req.hash, encoding=HEX));
}
}
$ varnishncsa -F '%r %{VCL_Log:cache_key}x'
GET /page1 1097cdf0f8af3b5daa2a22e85c7b7ada50bbd295efc7fcfb37c54eaebd832b43
GET /page2 49b6a5b0905ff6e20824cd2569f150ef694776c522a152674c1bd18c2ed51268
GET /page3 7f9ce7c3f05f8a34af3b5daa2a22e85ce345b750bd295efc7fcf23b47c55cebd
VCL
29. Varying the response
The Vary HTTP response header tells Varnish to create
separate cache objects for the same cache key, based on
the values of the request headers it names.
30. Vary
Sending a Vary: X-Country, X-Device response header from the backend
results in /page being cached in a number of copies equal
to all combinations of these two header values.
X-Country: us | X-Device: desktop
X-Country: us | X-Device: mobile
X-Country: pl | X-Device: desktop
X-Country: pl | X-Device: mobile
GET /page
4 variations of /page stored in cache
31. Finding the variation
All headers named in a Vary header must be present
in the req.http object before the cache lookup
is triggered (return (lookup)) so that Varnish
knows exactly which object is to be served.
32. Difference from hash_data
Using Vary leaves you the possibility to PURGE a URL with
a single request to delete all variations of the cached object,
because it does not change the cache key.
When using hash_data you need to pass all headers
and values used in vcl_hash, and so PURGE
all copies one by one with separate requests.
33. Vary once
Do not pass the Vary header to the client, as this may create copies on
the client side and also exposes internal header names.
# do not vary on the client side
sub vcl_deliver {
unset resp.http.Vary;
}
# do not vary in Varnish
sub vcl_backend_response {
unset beresp.http.Vary;
}
VCL
35. vmod_key
A Varnish module (vmod) that allows tagging
a response with several values.
Later on you can purge or softpurge all responses
containing a given tag using a single PURGE request.
http://bit.ly/vmod_xkey
36. Specifying response tags
Tags are specified in the xkey response header.
Multiple tags can be specified per header line, separated
by spaces, or in multiple response headers.
xkey: tag/1 tag2 tag-3
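A sketch of a PURGE endpoint built on this vmod; the xkey-purge request header name and the purgers ACL are assumptions, only xkey.softpurge (and its sibling xkey.purge) come from the module:
vcl 4.1;
import xkey;
acl purgers {
"127.0.0.1";
}
sub vcl_recv {
if (req.method == "PURGE") {
if (client.ip !~ purgers) {
return (synth(403, "Forbidden"));
}
# softpurge every object tagged with the key(s) given by the client
set req.http.n-gone = xkey.softpurge(req.http.xkey-purge);
return (synth(200, "Invalidated " + req.http.n-gone + " objects"));
}
}
VCL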
37. The first two requests create
cache objects tagged with
several values.
The backend or any other
client allowed to PURGE
can now remove all
cache entries by
referencing only a tag.
38. Purging with cache in mind
Softpurge works like purge but keeps the grace and keep values
of a cached object.
● sets ttl to 0s
● allows replying with 304 to conditional requests
● allows serving stale content to clients if the backend is unavailable
● enables asynchronous, automatic backend fetching
to update the object
39. Caching strategy summary
Default behaviour: Caches all GET requests with a valid Cache-Control
or falls back to the default TTL; respects conditional caching headers
like ETag and Last-Modified. A different URL or query string is needed
for a separate cache entry.
Modifying the cache key in vcl_hash: The easy way if you work with
request headers or values available in VCL. Makes purging harder.
Using the Vary header: Like the above, you work with request and VCL
values, but this solution lets the backend control what to cache and
with which values.
Tagging the response: Easy and powerful, however it is Varnish-specific
and requires a custom implementation on the backend side.