Big Data comes from a variety of sources: human activity online generates vast amounts of data every day, intentionally and otherwise, through social media, sensors, logs, and more. Content delivery networks (CDNs) can help distribute big data by caching content on servers located closer to users. While pushing content to CDNs offloads work from origin servers and improves performance, it also segments users and requires replication strategies to maintain consistency. Techniques include pre-computing static content from dynamic sources, pushing searches and other functions to CDNs, and experimenting with different cache models. Overall, CDNs can be an effective way to distribute big data, but they also introduce more complexity and dependence on the CDN provider.
This was presented at NHN on Jan. 27, 2009.
It introduces Big Data, its storage, and its analysis.
In particular, it covers the MapReduce debates and hybrid systems of RDBMS and MapReduce.
In addition, under the theme of Schema-Free, various non-relational data storages are explained.
Having trouble distinguishing Big Data, Hadoop, and NoSQL, or finding the connections among them? This slide deck from the Savvycom team can definitely help you.
Enjoy reading!
Big Data with Hadoop and HDInsight. This is an intro to the technology: if you are new to Big Data or have just heard of it, this presentation helps you learn a little more about it.
Dev Lakhani, Data Scientist at Batch Insights talks on "Real Time Big Data Applications for Investment Banks and Financial Institutions" at the first Big Data Frankfurt event that took place at Die Zentrale, organised by Dataconomy Media
Neustar is a fast-growing provider of enterprise services in telecommunications, online advertising, Internet infrastructure, and advanced technology. Neustar has engaged Think Big Analytics to leverage Hadoop to expand their data analysis capacity. This session describes how Hadoop has expanded their data warehouse capacity and agility for data analysis, reduced costs, and enabled new data products. We look at the challenges and opportunities in capturing hundreds of terabytes of compact binary network data, ad hoc analysis, integration with a scale-out relational database, more agile data development, and building new products integrating multiple big data sets.
Speaker: Geetha Balasundaram, Developer at ThoughtWorks
From tools and technology to people and requirements, what's different in the data engineering space? App development is well-trodden by now, and every enterprise wants to become data-guided. A data lake is a good start, yet there is much know-how and do-how to acquire. Drawing on experience building a data lake in the retail domain, the talk covers:
- What this vast new space of data engineering is
- Why it is critical to think in terms of data rather than features
- How important it is to understand these technologies and create a data lake that is usable and insightful to the business
A presentation on big data. It also covers the basics of Hadoop and its components, along with their architecture. Contents of the PPT:
1. Understanding Big Data
2. Understanding Hadoop & Its Components
3. Components of Hadoop Ecosystem
4. Data Storage Component of Hadoop
5. Data Processing Component of Hadoop
6. Data Access Component of Hadoop
7. Data Management Component of Hadoop
8. Hadoop Security Management Tools: Knox, Ranger
Content Delivery Networks are on the front lines of application performance. ThousandEyes provides deep insights into CDN performance, including geographic load balancing, latency and availability. We'll help you detect and diagnose issues with your CDN providers and track down problems at the origin or the edge.
In these slides, we'll share how to:
1. Measure and baseline CDN performance.
2. Diagnose issues with specific files, caches or edge locations.
3. Share data with your CDN to resolve problems.
A Better Rich Media Experience & Video Analytics at Arkena with Apache Hadoop (Reda Benzair)
As digital consumption of rich media content explodes and audience expectations peak, media providers have been challenged to deliver not only high-quality audience experiences but also real-time audience analytics that enable actionable insights for content publishers. Arkena, one of Europe's leading media services organizations, chose to power its analytics platform with Hortonworks Data Platform to cost-effectively store and analyze over 3.5 terabytes of data per day. Join Hortonworks and Arkena as they share the industry challenges faced and the solution created, which enables real-time and better analytics for their customers.
Climate Corporation: From Open Data to Risk and Farm Management Products for ... (WorldBankGroupFinances)
The Climate Corporation’s mission is to help all the world’s people and businesses adapt to climate change. They aim to help farmers around the world protect and improve their farming operations and profitability.
For their product offerings they are accessing and joining geographical and environmental data, agricultural production data and weather data at any location in the US.
Integrating multiple CDN providers at Etsy - Velocity Europe (London) 2013 (Marcus Barczak)
Relying on a single content delivery network for your site can impose a number of flexibility limitations. By diversifying your CDN providers you can put the power back in your hands, allowing you to get the best of both worlds in terms of performance, reliability and cost. In this talk Marcus and Laurie will present Etsy’s recent work integrating multiple CDN providers to their site delivery infrastructure.
This presentation was delivered at Velocity Europe, November 2013
Future of CDN - Next 10 Years - Ahmet Ozalp, Akamai Technologies - DigiWorld ... (IDATE DigiWorld)
Ahmet Ozalp is the VP of International Products and Strategy for the Media (Sola) Division of Akamai. In this role Ozalp leads the product management and strategy initiatives for Akamai’s video, software and application delivery, storage and analytics products in EMEA and Asia Pacific regions. He also leads and supports key partnership initiatives with major Telecommunications companies in the region.
Prior to Akamai, Ozalp was the CEO of Telenity, a global provider of mobile advertising and social network solutions. Prior to Telenity, Ozalp was a Partner at Atlas Venture, where he led investments in digital media, mobile and online advertising. While at Atlas, Ozalp was the lead investor and board member of Extend Media (acquired by CSCO) and also had board level involvement with Ellacoya Networks (acquired by Arbor Networks), Isilon (IPO, Nasdaq:ISLN) and Gotuit Media.
Before joining Atlas, Ozalp held executive positions in marketing and product management at Narad Networks (acquired by Nasdaq:CIEN) and was also part of the founding team of Newnet, a telecom software startup acquired by ADC (Tyco Electronics). In the early part of his career, Ozalp was also a management consultant with Bain & Company’s technology and media practice.
Ozalp holds an MBA from the Wharton School of University of Pennsylvania, an MS in EE from Columbia University and two US patents.
"Big Data" is a term as ubiquitous as data itself, but it is more than just a way to describe the massive amount of information created every day. In fact, I would argue that it is more of a dynamic than a one-dimensional term.
In this presentation, I walk business audiences through the history and rise of big data, the four Vs of big Data, and end by looking at some practical applications and recommendations.
Originally presented on February 26, 2013 in Washington, DC at the US Chamber of Commerce.
Mohanbir Sawhney, Robert R. McCormick Tribune Foundation Clinical Professor of Technology Kellogg School of Management, Northwestern University presents at the 2012 Big Analytics Roadshow.
Companies are drinking from a fire hydrant of data that is too big, moving too fast and is too diverse to be analyzed by conventional database systems. Big Data is like a giant gold mine with large quantities of ore that is difficult to extract. To get value out of Big Data, enterprises need a new mindset and a new set of tools. They also need to know how to extract actionable insights from Big Data that can lead to competitive advantage. The Big Story of Big Data is not what Big Data is, but what it means for business value and competitive advantage.... read more: http://www.biganalytics2012.com/sessions.html#mohan_sawhney
Building a Business on Hadoop, HBase, and Open Source Distributed Computing (Bradford Stephens)
This is a talk on a fundamental approach to thinking about scalability, and how Hadoop, HBase, and Lucene are enabling companies to process amazing amounts of data. It's also about how Social Media is making the traditional RDBMS irrelevant.
KVM and docker LXC Benchmarking with OpenStack (Boden Russell)
Passive benchmarking with docker LXC and KVM using OpenStack hosted in SoftLayer. These results provide initial insight into why LXC as a technology choice offers benefits over traditional VMs, and seek to answer the typical first question about LXC -- "why would I consider Linux Containers over VMs?" -- from a performance perspective.
Results here provide insight as to:
- Cloudy ops times (start, stop, reboot) using OpenStack.
- Guest micro benchmark performance (I/O, network, memory, CPU).
- Guest micro benchmark performance of MySQL; OLTP read, read / write complex and indexed insertion.
- Compute node resource consumption; VM / Container density factors.
- Lessons learned during benchmarking.
The tests here were performed using OpenStack Rally to drive the OpenStack cloudy tests and various other linux tools to test the guest performance on a "micro level". The nova docker virt driver was used in the Cloud scenario to realize VMs as docker LXC containers and compared to the nova virt driver for libvirt KVM.
Please read the disclaimers in the presentation, as this is only intended to be the "tip of the iceberg".
IWMW 2003: C7 Bandwidth Management Techniques: Technical And Policy Issues (IWMW)
Slides used in workshop session C7 on "Bandwidth Management Techniques: Technical And Policy Issues" at the IWMW 2003 event held at the University of Kent on 11-13 June 2003.
See http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-2003/sessions/#workshops-c
Website & Internet + Performance testing (Roman Ananev)
The presentation is about how a site works on the Internet and what happens when you open it in your browser -- what goes on under the hood of the server and the browser.
How to measure the performance of a CS-Cart project simply and without technical knowledge :) And of course, why all the online performance-testing services lie, or don't provide a clear view ;)
https://www.simtechdev.com/cloud-hosting
---
Cloud hosting for CS-Cart, Multi-Vendor, WordPress, and Magento
by Simtech Development - AWS and CS-Cart certified hosting provider
free installation & migration | free 24/7 server monitoring | free daily backups | free SSL | and more...
Monitoring as an entry point for collaboration (Julien Pivotto)
In the last years, we have been building complex stacks, made from lots of components. All of this backed by multiple teams. This talk will present how you can use monitoring to look at the business side and have everyone looking at the same dashboards, making cooperation a reality.
Event Driven Architecture with a RESTful Microservices Architecture (Kyle Ben...) (confluent)
Tinder’s Quickfire Pipeline powers all things data at Tinder. It was originally built using AWS Kinesis Firehoses and has since been extended to use both Kafka and other event buses. It is the core of Tinder’s data infrastructure. This rich data flow of both client and backend data has been extended to service a variety of needs at Tinder, including Experimentation, ML, CRM, and Observability, allowing backend developers easier access to shared client side data. We perform this using many systems, including Kafka, Spark, Flink, Kubernetes, and Prometheus. Many of Tinder’s systems were natively designed in an RPC first architecture.
Topics we’ll discuss on decoupling your system at scale via event-driven architectures include:
– Powering ML, backend, observability, and analytical applications at scale, including an end to end walk through of our processes that allow non-programmers to write and deploy event-driven data flows.
– Show end to end the usage of dynamic event processing that creates other stream processes, via a dynamic control plane topology pattern and broadcasted state pattern
– How to manage the unavailability of cached data that would normally come from repeated API calls for data that’s being backfilled into Kafka, all online! (and why this is not necessarily a “good” idea)
– Integrating common OSS frameworks and libraries like Kafka Streams, Flink, Spark and friends to encourage the best design patterns for developers coming from traditional service oriented architectures, including pitfalls and lessons learned along the way.
– Why and how to avoid overloading microservices with excessive RPC calls from event-driven streaming systems
– Best practices in common data flow patterns, such as shared state via RocksDB + Kafka Streams as well as the complementary tools in the Apache Ecosystem.
– The simplicity and power of streaming SQL with microservices
This is the course that was presented by James Liddle and Adam Vile for Waters in September 2008.
The book of this course can be found at: http://www.lulu.com/content/4334860
Designing a Scalable Twitter - Patterns for Designing Scalable Real-Time Web ... (Nati Shalom)
Twitter is a good example of a next-generation real-time web application, but building such an application imposes challenges such as handling an ever-growing volume of tweets and responses, as well as a large number of concurrent users who continually *listen* for tweets from the users (or topics) they follow. During this session we will review some of the key design principles addressing these challenges, including *NoSQL* alternatives and blackboard patterns. We will use Twitter as a use case while learning how to apply these principles to any real-time web application.
The Heart of the Data Mesh Beats in Real-Time with Apache Kafka (Kai Wähner)
If there were a buzzword of the hour, it would certainly be "data mesh"! This new architectural paradigm unlocks analytic data at scale and enables rapid access to an ever-growing number of distributed domain datasets for various usage scenarios.
As such, the data mesh addresses the most common weaknesses of the traditional centralized data lake or data platform architecture. And the heart of a data mesh infrastructure must be real-time, decoupled, reliable, and scalable.
This presentation explores how Apache Kafka, as an open and scalable decentralized real-time platform, can be the basis of a data mesh infrastructure and - complemented by many other data platforms like a data warehouse, data lake, and lakehouse - solve real business problems.
There is no silver bullet or single technology/product/cloud service for implementing a data mesh. The key outcome of a data mesh architecture is the ability to build data products; with the right tool for the job.
A good data mesh combines data streaming technology like Apache Kafka or Confluent Cloud with cloud-native data warehouse and data lake architectures from Snowflake, Databricks, Google BigQuery, et al.
Building collaborative HTML5 apps using a backend-as-a-service (HTML5DevConf ...) (João Parreira)
Slide deck for talk at the 2013 HTML5 Developers Conference in San Francisco. Covers the main BaaS critical success factors: SMART (Scalable, Mobile-ready, Available, Real-time enabled and Truly secure)
From Chuck Neerdaels' presentation "Yahoo! Scalable Storage and Delivery Services" at the 2009 Cloud Computing Expo in Santa Clara, CA, USA. Here's the talk description on the Expo's site: http://cloudcomputingexpo.com/event/session/541
Streaming analytics on Google Cloud Platform, by Javier Ramirez, teowaki
Do you think you can write a system to get data from sensors across the world, do real time analytics, and display the data on a dashboard in under 100 lines of code? Would you like to add some monitoring and autoscaling too? And what about serverless? In this talk I'll show you all the technologies GCP offers to build such a system reliably and at scale.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
89. Even when you change prices, you can still pre-compute the page ahead of time – you don’t need to compute the content while the page is being accessed
93. And even when you offer “Web 2.0” features such as customer ratings, you can asynchronously recompute (parts of) the pages using the new rating information
94. Some bookstore content modifications are not very critical. They can be updated with a lag, on a geographical basis
95. You see: many parts of an online bookstore seem dynamic but can actually be pre-computed and delivered (with a lag) as static content in web terms
96. It’s all about the frequency of change, the distances, the breadth of distribution, and the big data pain
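The pre-computation idea on these slides can be sketched in a few lines. This is a minimal illustration, not code from the deck: the product IDs, prices, and function names are all hypothetical. The key point is that rendering happens when the price changes, not when the page is requested.

```python
# Sketch: render the product page on data change, serve a static lookup
# at request time. RENDERED_PAGES stands in for the static content store
# a CDN could pull from.
RENDERED_PAGES = {}

def render_product_page(product_id, price):
    # Expensive rendering work goes here; it runs off the request path.
    return f"<html><body>Product {product_id}: ${price:.2f}</body></html>"

def on_price_change(product_id, new_price):
    # Triggered by the pricing system, possibly well before users look.
    RENDERED_PAGES[product_id] = render_product_page(product_id, new_price)

def serve(product_id):
    # Request time: a plain static lookup, no computation.
    return RENDERED_PAGES[product_id]

on_price_change(42, 19.99)
print(serve(42))
```

The asynchronous recomputation of rating fragments (slide 93) follows the same pattern: the write path triggers a background re-render of the affected page parts.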
100. Even this ultimately dynamic-sounding feature can be (partially) de-dynamized. Consider the full-text index as static content, not necessarily the data itself
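A hedged sketch of what "the index as static content" can mean: build an inverted index offline and publish it as a static artifact that caches can hold, so queries run against the shipped index rather than the live data. The toy corpus and function names below are illustrative assumptions, not from the deck.

```python
import json

def build_index(documents):
    # Offline step: a tiny inverted index mapping words to document IDs.
    index = {}
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index.setdefault(word, []).append(doc_id)
    return index

def publish(index, path):
    # The serialized index is just another static file a CDN can cache.
    with open(path, "w") as f:
        json.dump(index, f)

docs = {"b1": "Big Data with Hadoop", "b2": "Hadoop in Practice"}
idx = build_index(docs)
print(idx["hadoop"])  # lists both books
```

Searches served at the edge then only need a lookup in the published index; the underlying catalog data never has to leave the origin.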
102. Sure, you cannot pre-compute the shopping cart. But maybe you also don’t need to synchronize a German customer’s cart to the whole world – keep it “local” instead
103. Owning big data doesn’t necessarily mean owning 100% dynamic data in web terms
120. A CDN is like a deputy: you make a contract, and it takes over parts of your platform
121. From there, it delivers to your users the content you tell it to deliver, while being much closer to them and much smarter about managing the load
122. A CDN has its own infrastructure, including nodes directly at the backbones, offering web caching, server load balancing, request routing and, based upon these techniques, content delivery services
123. What you saw earlier: the CDN’s DNS infrastructure returned a different IP address each time, with TTL = 20
124. This is done either through DNS cache “splitting” or dynamically, based on the IP address of the name server that made the DNS A query
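The routing decision in slide 124 can be sketched as follows. This is a simplified illustration under stated assumptions: the region mapping, the resolver ranges (taken from documentation-example address space), and the edge addresses (reusing the 10.2.3.40 / 50.6.7.80 examples from the slides) are all hypothetical, and a real CDN name server weighs far more signals.

```python
import ipaddress

# Edge pools keyed by region; a real deployment has many more.
EDGE_POOLS = {
    "eu": ["10.2.3.40"],
    "us": ["50.6.7.80"],
}
# Hypothetical range of European resolver addresses.
EU_RESOLVERS = ipaddress.ip_network("192.0.2.0/24")

def answer_a_query(resolver_ip, ttl=20):
    # Pick the answer based on who is asking; the short TTL keeps
    # resolvers coming back so routing can change quickly.
    region = "eu" if ipaddress.ip_address(resolver_ip) in EU_RESOLVERS else "us"
    return {"answers": EDGE_POOLS[region], "ttl": ttl}

print(answer_a_query("192.0.2.7"))     # EU resolver gets the EU edge
print(answer_a_query("198.51.100.9"))  # everyone else gets the US edge
```

The TTL = 20 from slide 123 is what makes this dynamic: the answer is only cached for seconds, so the CDN can steer traffic almost continuously.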
126. What you can now expect is that the returned IP address leads you to a load balancer – your gate to a whole sub-infrastructure of the CDN, which balances between, for example, web caches
127. [Diagram: DNS A queries to the CDN name server return edge addresses (e.g. 10.2.3.40, 50.6.7.80) instead of your servers (1.2.3.4, 5.6.7.8); caches serve the last mile, with inter-cache replication between them and cache refresh from your servers]
128. The CDN uses different algorithms to decide where it routes user requests: based upon current load, cost, location, etc.
129. But in the end, your content gets delivered to the user. If it expires, the CDN refreshes it from your servers in the background
130. According to HTTP/1.1, a web cache is controlled by: freshness, validation, invalidation
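Two of the three mechanisms from slide 130 can be sketched with plain header handling. This is a simplified illustration, not a full HTTP/1.1 cache: it only handles the `max-age` directive for freshness and `ETag`/`If-None-Match` for validation, and ignores invalidation, `Expires`, and the many other directives a compliant cache must honor.

```python
def is_fresh(age_seconds, cache_control):
    # Freshness: the cache may answer without contacting the origin
    # while the response age is within Cache-Control: max-age.
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return age_seconds < int(directive.split("=", 1)[1])
    return False  # no max-age: treat as stale in this sketch

def revalidation_headers(etag):
    # Validation: once stale, revalidate with a conditional request;
    # the origin answers 304 Not Modified if the entity is unchanged.
    return {"If-None-Match": etag}

print(is_fresh(10, "public, max-age=60"))   # still fresh
print(is_fresh(120, "public, max-age=60"))  # stale, must revalidate
print(revalidation_headers('"abc123"'))
```

Invalidation, the third mechanism, is triggered by unsafe methods (e.g. POST, PUT, DELETE) passing through the cache for the same resource.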
131. As the very last step, you might still have to serve the “last mile” – the very last application access, e.g. the final item view or similar. Here, the user hits your server