How can we rewrite our applications to handle the new challenges of scaling in the cloud? Use patterns, so you can use the best technology at every tier.
Sn wf12 amd fabric server (Satheesh Nanniyur), Oct '12
Big Data has influenced the data center architecture in ways unimagined before. This presentation explores the Fabric Compute and Storage architectures to enable extreme scale-out, low power, high density Big Data deployments
Overview of SaaS and online services and the business reasons why organisations should be considering these. Delivered by Ben Kepes at Intergen's ON seminar series in May 2010.
Infochimps #1 Big Data Platform for the Cloud (Brian Krpec)
The Infochimps Platform is the simplest, fastest, and most flexible way to implement proven big data infrastructure in the cloud. Scalably and affordably ingest data from wherever you need — your in-house systems, external data feeds, data from the web, or our Data Marketplace. Make it useful with in-stream data decoration and augmentation. Store and analyze it in the best place for your application. Hadoop, NoSQL, real-time analytics — how do you tie it all together? The Infochimps Platform takes the mystery and difficulty out of big data and seamlessly integrates it with your existing environment, so you can focus on gaining business insights from your data fast.
Azure SQL Database Managed Instance is a new flavor of Azure SQL Database that is a game changer. It offers near-complete SQL Server compatibility and network isolation to easily lift and shift databases to Azure (you can literally back up an on-premises database and restore it into an Azure SQL Database Managed Instance). Think of it as an enhancement to Azure SQL Database that is built on the same PaaS infrastructure and maintains all its features (e.g., active geo-replication, high availability, automatic backups, database advisor, threat detection, intelligent insights, vulnerability assessment) but adds support for databases up to 35 TB, VNET, SQL Agent, cross-database querying, replication, etc. So you can migrate your databases from on-premises to Azure with very little migration effort, which is a big improvement over the current Singleton and Elastic Pool flavors, which can require substantial changes.
Enterprise Content Management and Microsoft Office SharePoint Server 2007 - U... (Dave Healey)
SharePoint Server 2007 is changing the way customers think about Information Management: from a specialized vertical application to broadly available, horizontal infrastructure. Understand how SharePoint is changing the ECM marketplace and learn how to take advantage of the opportunity to grow your business.
Neustar is a fast-growing provider of enterprise services in telecommunications, online advertising, Internet infrastructure, and advanced technology. Neustar has engaged Think Big Analytics to leverage Hadoop to expand its data analysis capacity. This session describes how Hadoop has expanded their data warehouse capacity and agility for data analysis, reduced costs, and enabled new data products. We look at the challenges and opportunities in capturing hundreds of terabytes of compact binary network data, ad hoc analysis, integration with a scale-out relational database, more agile data development, and building new products integrating multiple big data sets.
Engineering Interoperable and Reliable Systems (Rick Warren)
The features of a communication technology that yield the properties of interoperability and reliability can be visualized in layers: technical (at the level of bytes), syntactic (at the level of messages), semantic (at the level of data, i.e. what the messages refer to), and so on. Real-world systems require at least data-level interoperability and reliability. The question is: will you acquire something that already supports those capabilities, or will you build it atop something that doesn't? This talk compares and contrasts DDS and AMQP as technology exemplars in each category.
4 Ways To Save Big Money in Your Data Center and Private Cloud (Tervela)
The thirst for real-time access to rich content and big data is turning enterprise datacenters into private computing clouds. However, making exabyte-scale data available and responsive to a global application network gets expensive. Fortunately, there are things you can do to save big money in these sophisticated new environments. In this presentation you will learn how to save money, avoid costs, and create significant efficiencies in your private cloud by consolidating databases and data warehouses, slashing big data storage and storage-based data replication, replacing expensive middleware, and eliminating cold disaster recovery.
[PASS Summit 2016] Azure DocumentDB: A Deep Dive into Advanced Features (Andrew Liu)
Let's talk about how you can get the most out of Azure DocumentDB. In this session we will dive deep into the mechanics of DocumentDB and explain the various levers available to tune performance and scale. From partitioned collections to global databases to advanced indexing and query features - this session will equip you with the best practices and nuggets of information that will become invaluable tools in your toolbox for building blazingly fast large-scale applications.
This presentation provides an introduction to Azure DocumentDB. Topics include elastic scale, global distribution, and guaranteed low latencies (with SLAs), all in a managed document store that you can query using SQL and JavaScript. We also review common scenarios and advanced Data Science scenarios.
DocumentDB is a powerful NoSQL solution. It provides elastic scale, high performance, global distribution, a flexible data model, and is fully managed. If you are looking for a scaled OLTP solution that is too much for SQL Server to handle (e.g., millions of transactions per second) and/or will be using JSON documents, DocumentDB is the answer.
A presentation on Azure DocumentDB (a service that is part of Microsoft Azure) given at the first meetup of the Azure Fridays São Paulo group on 25/11/2016.
Cloud architecture and deployment: The Kognitio checklist, Nigel Sanctuary, K... (CloudOps Summit)
CloudOps Summit 2012, Frankfurt, 20.9.2012 Track 2 - Build and Run
by Nigel Sanctuary, VP Propositions at Kognitio (www.kognitio.com)
http://cloudops.de/sprecher/#nigelsanctuary
Find the video of this talk at http://youtu.be/wQrHQNOMlKc
Slides for the JavaOne 2012 session on Java batch for Cost Optimized Efficiency. This session talks about the importance of Java Batch in Enterprise computing and provides a reference architecture, overview of the JSR 352 and the WebSphere Batch solutions.
A view on architectural considerations and models for the emerging context of software plus services and in view of technologies such as Windows Azure.
This presentation shows how users of SAP solutions achieve the highest levels of business efficiency and flexibility by using NetApp's unified data infrastructure.
Cloud computing revolutionized application design, and changed the way people think about infrastructure. The rise of cloud computing coincided with a new generation of applications and services that required scale. New architecture and design had to take into account low latency network connectivity, geographic distribution, large real-time data stores, the ability to meet demand (while not knowing exactly how much demand to handle), and so much more. We refer to this as Internet Scale.
Yet most discussion of scale and cloud revolves around compute as virtualized instances, which have defined configurations and constrained options. Delivering on the promise of Internet Scale involves substantial upfront design, and a comprehensive understanding of the entire architecture - from the underlying hardware, to the operating system, the application stack, services, and deployment. And, it involves choice - choices you should make based on your requirements. Join us for a discussion on the many facets of Internet Scale, and how it can apply to your applications and services.
Businesses are generating more data than ever before.
Doing real-time data analytics requires IT infrastructure that often needs to be scaled up quickly, and running an on-premises environment in this setting has its limitations.
Organisations often require a massive amount of IT resources to analyse their data, and the upfront capital cost can deter them from embarking on these projects.
What’s needed is scalable, agile and secure cloud-based infrastructure at the lowest possible cost so they can spin up servers that support their data analysis projects exactly when they are required. This infrastructure must enable them to create proof-of-concepts quickly and cheaply – to fail fast and move on.
We usually talk about and present Azure IoT (Central) from a somewhat "maker" angle. In this session, instead, we talk to the SCADA engineer. How do you configure Azure IoT Central for the industrial world? Where is OPC/UA? What does IoT Plug & Play have to do with all this? And what advantages does Azure IoT Central give us? We try to answer these and other questions in this session...
Azure developers like PaaS services because they are "ready to use". But when we propose our solutions to companies, we run into IT departments that appreciate infrastructural, IaaS building blocks. Why not (re)discover them, adding a pinch of Hybrid too, which with the recent Azure Kubernetes Service Edge Essentials can even run on hardware you can keep at home? In this session we will explore, among other things, VNETs, S2S VPNs, Azure Arc, Private Endpoints, and AKS EE.
Static abstract members in C# 11 interfaces and around .NET 7.pptx (Marco Parenzan)
Did interfaces in C# need to evolve? Maybe yes. Do they violate some fundamental principles? We will see. Are we jumping through some hoops? Let's explore all this by telling a story (of code, of course).
Azure Synapse Analytics for your IoT Solutions (Marco Parenzan)
Let's find out in this session how Azure Synapse Analytics, with its SQL Serverless Pool, ADX, Data Factory, Notebooks, and Spark, can be useful for managing data analysis in an IoT solution.
Power BI Streaming Data Flow and Azure IoT Central (Marco Parenzan)
Since 2015, Power BI users have been able to analyze data in real time thanks to integration with other Microsoft products and services. With streaming dataflows, real-time analytics comes fully inside Power BI, removing most of the restrictions we had, while integrating key analytics capabilities such as streaming data preparation and no-code authoring. To see it in action, we will study a specific streaming case: IoT with Azure IoT Central.
What are actors? What are they used for? How can we develop them? And how are they published and used on Azure? Let's see how it's done in this session.
Generic Math, a feature now scheduled for .NET 7, and Azure IoT PnP reawakened a topic that in my past led me to make two or three trips, thanks to the University of Trieste, to Cambridge (around 2006/2007) and to Seattle (2010, when I spoke publicly about Azure for the first time :) and where I got to know the legendary Don Box!), to talk about .NET code dealing with mathematics and physics: units of measure and matrices. The advent of Notebooks in the .NET world and an old dream tied to the ANTLR library (and all my Code Generation exercises) lead me to put this stew of ideas in order... or at least I'll try (I don't know if it all holds together).
.NET is better every year for a developer who still dreams of developing a video game. Without pretensions and without talking about Unity or any other framework, just "barebones" .NET code, we will try to write a game (or parts of it) in the 80's style (because I was a kid in those years). In Christmas style.
Building IoT infrastructure on edge with .net, Raspberry PI and ESP32 to conn... (Marco Parenzan)
IoT scenarios necessarily pass through the Edge component, and the Raspberry Pi is a great way to explore this world. If we need to receive IoT events from sensors, how do I implement an MQTT endpoint? Kafka is a clever way to do this. And how do I process the data? Kafka? Spark? RabbitMQ? How do we write custom code for these environments? .NET, now in version 6, is another clever way to do it! And maybe we can also communicate with Azure. We'll see in this session if we can make it all work!
How can you handle defects? If you are in a factory, production can produce objects with defects. Or values from sensors can tell you over time that some values are not "normal". What can you do as a developer (not a Data Scientist) with .NET or Azure to detect these anomalies? Let's see how in this session.
What advantages does Azure give us? From a software development standpoint, one of them is certainly the variety of data management services. This lets us stop being SQL-centric and use the right service for the right problem, up to applying a Polyglot Persistence strategy (and we will see what that means), while respecting proper IT resource management and DevOps practices.
There is still distrust of the Internet of Things, and the cost of custom solutions does not help. Azure IoT Central is a customizable SaaS service that makes IoT accessible at sustainable costs. Let's look at the peculiarities of this service.
It happens that we have to develop several services and deploy them in Azure. They are small and repetitive but distinct, often not very different from one another. Why not use code generation techniques to simplify the development and deployment of these services? Let's see how .NET comes to our aid and helps us deploy to Azure.
Running Kafka and Spark on Raspberry PI with Azure and some .net magic (Marco Parenzan)
IoT scenarios necessarily pass through the Edge component, and the Raspberry Pi is a great way to explore this world. If we need to receive IoT events from sensors, how do I implement an MQTT endpoint? Kafka is a clever way to do this. And how do I process the data in Kafka? Spark is another clever way of doing this. How do we write custom code for these environments? .NET, now in version 6, is another clever way to do it! And maybe we can also communicate with Azure. We'll see in this session if we can make it all work!
Time Series Anomaly Detection with Azure and .NET (Marco Parenzan)
If you have any device or source that generates values over time (even a log from a service), you want to determine whether, in a given time frame, the time series is correct, or whether you can detect some anomalies. What can you do as a developer (not a Data Scientist) with .NET or Azure? Let's see how in this session.
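As a hedged sketch of the idea only (plain Python with made-up data, not the actual .NET or Azure Anomaly Detector APIs), a rolling z-score is one simple way to flag points in a time series that deviate from recent behaviour:

```python
# Illustrative sketch: flag points that deviate strongly from the
# mean of a rolling window. Window size and threshold are assumptions.
from statistics import mean, stdev

def detect_anomalies(series, window=10, threshold=3.0):
    """Return the indices whose value lies more than `threshold`
    standard deviations from the mean of the preceding window."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A gently varying sensor signal with one injected spike at index 25.
signal = [10 + 0.1 * (i % 5) for i in range(50)]
signal[25] = 42.0
print(detect_anomalies(signal))  # [25]
```

Real anomaly-detection services refine this basic scheme with seasonality handling and learned thresholds, but the core intuition is the same.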
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
9. Traditional CRUD application
Let’s take a step back. Why do we build applications like we do today?
It started with a stack of paper… that needed to be keyed into the machine… and along came the CRUD app!
10. Traditional scale-up architecture
Common characteristics:
synchronous processes
sequential units of work
tight coupling
stateful
pessimistic concurrency
clustering for HA
vertical scaling
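The pessimistic-concurrency trait in the list above can be sketched as a single exclusive lock that every writer must take, serializing all units of work (illustrative Python, not any specific product):

```python
# Illustrative sketch of pessimistic concurrency: one exclusive lock
# guards the shared state, so writers are fully serialized even when
# their updates would not actually conflict.
import threading

lock = threading.Lock()
account = {"balance": 0}

def deposit(amount):
    with lock:                      # block every other writer
        account["balance"] += amount

threads = [threading.Thread(target=deposit, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(account["balance"])           # 50
```

Correctness is easy to reason about, but throughput is capped by the lock, which is exactly why this style hits scaling limits.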
14. Scalability: the ability to increase or reduce the number of resources without affecting the end-user experience.
15. Traditional scale-up architecture
To scale, get bigger servers:
expensive
has scaling limits
inefficient use of resources
(diagram: web / app server / data store)
21. Stale Data
In computer processing, if a processor changes the value of an operand and then, at a subsequent time, fetches the operand and obtains the old rather than the new value of the operand, then it is said to have seen stale data.
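The definition above can be reproduced in a few lines (illustrative names, with a plain dict standing in for shared state):

```python
# Illustrative sketch of stale data: a reader caches an operand,
# another processor changes it, and the reader later uses the old value.
store = {"balance": 100}            # shared source of truth

cached_balance = store["balance"]   # reader fetches and caches the operand

store["balance"] = 250              # another processor changes the value

# At a subsequent time the reader uses its copy and obtains the old
# rather than the new value: it has seen stale data.
print(cached_balance)               # 100
print(store["balance"])             # 250
```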
22. Why is CQRS needed?
To understand this better, let's look at a basic multi-user system.
(diagram: two users retrieve data; one modifies the data while the other is still looking at stale data)
Stale data is inherent in a multi-user system.
The machine is now the source of truth… not a piece of paper.
23. Why is CQRS needed?
All of this to provide scalability & a consistent view of the data.
But did we succeed?
24. Why is CQRS needed?
Back to our CRUD app…
Where is the consistency? We have stale data all over the place!
25. Why is a new model needed?
Since stale data always exists, is all of this complexity really needed to scale?
No, we need a different approach. One that:
offers extreme scalability
inherently handles multiple users
can grow to handle complex problems without growing development costs
27. Use more pieces, not bigger pieces
LEGO 7778 Midi-scale Millennium Falcon
• 9.3 x 6.7 x 3.2 inches (L/W/H)
• 356 pieces
LEGO 10179 Ultimate Collector's Millennium Falcon
• 33 x 22 x 8.3 inches (L/W/H)
• 5,195 pieces
30. Fundamental concepts
Horizontal scaling
Small pieces, loosely coupled
Distributed computing best practices:
asynchronous processes (event-driven design)
parallelization
idempotent operations (handle duplicity)
optimistic concurrency
shared-nothing architecture
fault-tolerance by redundancy and replication
etc.
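Idempotency is the easiest of these practices to show in code. The sketch below is a minimal, illustrative example (all names are made up): applying the same operation twice has the same effect as applying it once, which is what lets a system safely handle the duplicate deliveries that at-least-once messaging produces.

```python
# Sketch of an idempotent operation: crediting an account twice with the
# same operation id has the same effect as crediting it once.
class Account:
    def __init__(self):
        self.balance = 0
        self.applied_ids = set()  # remember which operations we have seen

    def apply_credit(self, operation_id, amount):
        # A duplicate delivery (same operation_id) is silently ignored.
        if operation_id in self.applied_ids:
            return self.balance
        self.applied_ids.add(operation_id)
        self.balance += amount
        return self.balance

account = Account()
account.apply_credit("op-1", 100)
account.apply_credit("op-1", 100)  # duplicate: no effect
print(account.balance)
```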
31. Cloud-Scale Architecture
Design
Horizontal scaling
Service-oriented composition
Eventual consistency
Fault tolerant (expect failures)
Security: claims-based authentication & access control, federated identity, data encryption & key mgmt.
Data & Content
De-normalization
Logical partitioning
Distributed in-memory cache
Diverse data storage options (persistent & transient, relational & unstructured, text & binary, read & write, etc.)
Processes
Loosely coupled components
Parallel & distributed processing
Asynchronous distributed communication
Idempotent (handle duplicity)
Isolation (separation of concerns)
Management
Policy-driven automation
Aware of application lifecycles
Handle dynamic data schema and configuration changes
32. Scale-out architecture
Common characteristics
small logical units of work
loosely-coupled processes
stateless
event-driven design
optimistic concurrency
partitioned data
redundancy
fault-tolerance
retry-based recoverability
(diagram: multiple web → app server → data store stacks)
33. Scale-out architecture
To scale, add more servers, not bigger servers.
When problems occur:
smaller failure impact
higher perceived availability
simpler recovery
(diagram: multiple web → app server → data store stacks)
34. Scale-out architecture + distributed computing
Scalable performance at extreme scale
asynchronous processes
parallelization
Smaller footprint
optimized resource usage
Reduced perceived response time
improved throughput
(diagram: parallel and async tasks across multiple web → app server → data store stacks)
36. How does CQRS work?
Task-based UI
Why rethink the User Interface?
» Grids don’t capture the user’s intent
37. CQRS
As a concept
A set of principles; a way of thinking about software architecture.
As a pattern
A way of designing and developing scalable and robust enterprise solutions where reads are independent from writes.
What it is not
The CQRS pattern says nothing about how this should be implemented.
39. Common components of the CQRS pattern
Task-based UI
ViewModels
Commands
Domain Objects
Events
Persistent View Model
40. Task-Driven User Interface
Scrum-based analysis
Collect user stories / scenarios
Each user story is not an entity
Every user story is a task
41. How does CQRS work?
Rethinking the User Interface
Adjust UI design to capture intent
what did the user really mean?
intent becomes a command
Why is intent important?
Last name changed because of misspelling
Last name changed because of marriage
Last name changed because of divorce
The user interface can affect your architecture.
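The last-name example above can be sketched as three distinct commands. This is an illustrative sketch with made-up class names: the point is that one generic "update row" from a grid becomes three intent-revealing messages the system can react to differently.

```python
# Three distinct commands for what a CRUD grid would treat as one
# generic field update. All class names are illustrative.
from dataclasses import dataclass

@dataclass
class CorrectLastNameMisspelling:
    customer_id: str
    new_last_name: str

@dataclass
class ChangeLastNameDueToMarriage:
    customer_id: str
    new_last_name: str

@dataclass
class ChangeLastNameDueToDivorce:
    customer_id: str
    new_last_name: str

# A task-based UI sends the command that matches what the user meant,
# so downstream handlers can behave differently per intent.
cmd = ChangeLastNameDueToMarriage("c-42", "Skywalker")
print(type(cmd).__name__)
```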
42. View Models
ViewModel
Only data
Flat, only strings
Why is the DomainModel not a good fit?
Views should not know how to traverse the DM
Views usually need fewer properties
Using ORMs you might trigger a SQL query by mistake
How to do it?
Copy the needed properties from DM to VM
Possibly flatten data
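The copy-and-flatten step can be sketched as follows. This is a minimal illustration (all type and field names are made up): the nested domain model is mapped to a flat, string-only view model so the view never traverses domain objects itself.

```python
# Minimal sketch: copy only the needed properties from the domain
# model into a flat, string-only view model. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Address:            # nested part of the domain model
    city: str
    country: str

@dataclass
class Customer:           # domain entity with nested structure
    id: int
    name: str
    address: Address

@dataclass
class CustomerViewModel:  # flat, only strings, no behavior
    id: str
    name: str
    city: str

def to_view_model(c: Customer) -> CustomerViewModel:
    # Flatten: the view never walks Customer -> Address itself.
    return CustomerViewModel(id=str(c.id), name=c.name, city=c.address.city)

vm = to_view_model(Customer(7, "Ada", Address("London", "UK")))
print(vm)
```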
43. How does CQRS work?
Validation
Increase the likelihood of the command succeeding:
validate client-side
optimize validation using the persistent view model
What about user feedback?
Polling: wait until the read model is updated
Use asynchronous messaging such as email: “Your request is being processed. You will receive an email when it is completed.”
Just fake it! Scope the change to the current user and update a local in-memory model.
44. How do Commands work?
Commands encapsulate the user’s intent but do not contain business logic, only enough data for the command.
What makes a good command?
A command is an action – it starts with a verb.
The kind you can reply to with: “Thank you. Your confirmation email will arrive shortly.”
Inherently asynchronous.
Commands can be considered messages. Messaging provides an asynchronous delivery mechanism for commands; as a message, a command can be routed, queued, and transformed, all independent of the sender and receiver.
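A command-as-message can be sketched with an in-process queue standing in for the messaging infrastructure. This is an illustrative sketch, not a specific framework; the command class, queue, and reply text are all made up.

```python
# Commands as messages: the sender enqueues the command and replies
# immediately; a worker processes it later. Names are illustrative.
from dataclasses import dataclass
from queue import Queue

@dataclass
class BookHotelRoom:          # starts with a verb, carries only needed data
    guest_id: str
    room_type: str
    nights: int

command_bus: Queue = Queue()  # transport is independent of sender/receiver

def send(command):
    command_bus.put(command)
    # Reply right away; actual processing happens asynchronously.
    return "Thank you. Your confirmation email will arrive shortly."

print(send(BookHotelRoom("g-1", "double", 2)))
handled = command_bus.get()   # a worker would pick this up asynchronously
print(handled.room_type)
```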
45. Domain Model
The domain model is utilized for processing commands; it is unnecessary for queries.
Unlike the entity objects you may be used to, aggregate roots in CQRS only have methods (no getters/setters).
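An aggregate root in this style can be sketched as below. The class and event names are illustrative: it exposes only intent-revealing methods that enforce invariants and record events, with no getters or setters, because queries never touch the domain model.

```python
# Sketch of a CQRS-style aggregate root: methods only, no getters or
# setters; state changes are recorded as events. Names are illustrative.
class InventoryItem:  # aggregate root
    def __init__(self, item_id):
        self._item_id = item_id
        self._count = 0
        self.pending_events = []

    def check_in(self, quantity):
        # business rule lives inside the aggregate, not in the command
        if quantity <= 0:
            raise ValueError("must check in a positive quantity")
        self._count += quantity
        self.pending_events.append(("ItemsCheckedIn", self._item_id, quantity))

    def remove(self, quantity):
        if quantity > self._count:
            raise ValueError("cannot remove more than available")
        self._count -= quantity
        self.pending_events.append(("ItemsRemoved", self._item_id, quantity))

item = InventoryItem("sku-1")
item.check_in(10)
item.remove(3)
print(item.pending_events)
```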
46. Events
Events describe changes in the system state.
An Event Bus can be utilized to dispatch events to subscribers.
The events’ primary purpose is to update the read model.
Events can also provide integration with external systems.
CQRS can also be used in conjunction with Event Sourcing.
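A minimal in-process event bus can be sketched as follows. This is an illustrative stand-in (the event type, handler, and read-model shape are all made up): events are dispatched to subscribers, and the subscriber shown here does the primary job of keeping the read model in sync.

```python
# Minimal in-process event bus: publish dispatches each event to its
# subscribers, one of which updates the read model. Names illustrative.
from collections import defaultdict

subscribers = defaultdict(list)
read_model = {}  # denormalized view, keyed by customer id

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)

def update_read_model(payload):
    # projection: keep the read side in sync with reported changes
    read_model[payload["id"]] = payload["name"]

subscribe("CustomerRenamed", update_read_model)
publish("CustomerRenamed", {"id": "c-1", "name": "Ada Lovelace"})
print(read_model)
```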
47. Persistent View Model
Reads are usually the most common activity – often 80-90%. Why not optimize them?
The read model is based on how the user wants to see the data.
The read model can be a denormalized RDBMS, a document store, etc.
Reads from the view model don’t need to be loaded into the domain model; they can be bound directly to the UI.
48. Persistent View Model
Data duplicated, no relationships, data pre-calculated.
Customer Service Rep view – list of customers (Rep_Customers_Table): ID, Name, Phone
Supervisor view – list of customers (Supervisor_Customers_Table): ID, Name, Phone, Lifetime Value
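The two per-role views above can be sketched with plain dictionaries standing in for the tables. This is an illustrative sketch following the slide's table and field names: one change event updates both denormalized views, and the supervisor's lifetime value is pre-calculated on write rather than computed on read.

```python
# Sketch: one event updates two denormalized, role-specific views.
# Data is deliberately duplicated; there are no relationships.
rep_customers = {}         # Rep view: Name, Phone
supervisor_customers = {}  # Supervisor view: Name, Phone, Lifetime Value

def on_order_placed(customer_id, name, phone, order_value):
    rep_customers[customer_id] = {"name": name, "phone": phone}
    row = supervisor_customers.setdefault(
        customer_id, {"name": name, "phone": phone, "lifetime_value": 0})
    row["lifetime_value"] += order_value  # pre-calculated on write

on_order_placed("c-1", "Ada", "555-0100", 120)
on_order_placed("c-1", "Ada", "555-0100", 80)
print(supervisor_customers["c-1"]["lifetime_value"])
```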
50. When should you not use CQRS?
CQRS can be overkill for simple applications.
Don’t use it in a non-collaborative domain, or where you can horizontally add more database servers to support more users/requests/data at the same time you’re adding web servers – there is no real scalability problem. – Udi Dahan
51. When should I use CQRS?
Guidelines for using CQRS:
Large, multi-user systems: CQRS is designed to address concurrency issues.
Scalability matters: with CQRS you can achieve great read and write performance. The system intrinsically supports scaling out; by separating read & write operations, each can be optimized.
Difficult business logic: CQRS forces you not to mix domain logic and infrastructural operations.
Large or distributed teams: you can split development tasks between different teams with defined interfaces.
55. Low Available Compute Node
Many virtual servers of public clouds are offered at a low availability. Sometimes, availability is additionally expressed in an uncommon manner. For example, Amazon guarantees an availability of EC2 instances of 99.95% during a service year of 365 days [8]. 99.95% availability means about 4.4 h/yr of downtime.
However, this does not mean that a single instance has 99.95% availability during this time period, as could be expected. Instead, unavailability is defined as the state when all running instances cannot be reached for longer than five minutes and no replacement instances can be provisioned.
56. Elastic Infrastructure
Resources shall be assigned to and revoked from applications dynamically depending on the current load.
The infrastructure must support dynamic provisioning and deprovisioning of resources.
This functionality must be offered through an API to be used by automated management tools and the applications that are hosted by the environment.
An elastic infrastructure supports the dynamic allocation of (virtual) resources that constitute a common resource pool.
58. Strict Consistency
A storage offering usually consists of multiple replicas to ensure fault tolerance. It is of major importance that the consistency of the data contained in these replicas is maintained at all times, while performance is of secondary importance.
The highest level of consistency is granted if all replicas are updated whenever the data contained in them is altered. However, this would mean that the availability of the overall storage solution is decreased drastically. It has to be ensured that the storage is available even if not all replicas are available, while still the correct version of the data is read.
59. Eventual Consistency
Eventually consistent data storage allows reducing data consistency to increase availability and performance, since the impact of network partitioning is reduced and fewer replicas have to be accessed during read and write operations.
While strictly consistent databases ensure that at least one replica holding the current version is always read, eventually consistent databases allow obsolete versions to be read as well.
This increases the availability of the storage offering, since only one replica has to be available to successfully execute a read operation.
61. CAP (Consistency, Availability, Partition tolerance) Theorem
A shared-data system can provide at most two of these three properties.
Consistency + Availability (CA)
• High data integrity
• Single site, cluster database, LDAP, xFS file system, etc.
• 2-phase commit, data replication, etc.
Consistency + Partition tolerance (CP)
• Distributed database, distributed locking, etc.
• Pessimistic locking, minority partition unavailable, etc.
Availability + Partition tolerance (AP)
• High scalability
• Distributed cache, DNS, etc.
• Optimistic locking, expiration/leases, etc.
62. Hybrid architectures
Scale-out (horizontal)
– BASE: Basically Available, Soft state, Eventually consistent
– availability first; best effort
– aggressive (optimistic)
– shared nothing
– favor extreme size
– e.g., user requests, data collection & processing, etc.
Scale-up (vertical)
– ACID: Atomicity, Consistency, Isolation, Durability
– focus on “commit”
– conservative (pessimistic)
– transactional
– favor accuracy/consistency
– e.g., BI & analytics, financial processing, etc.
Most distributed systems employ both approaches.
63. Storage Services
Relational Data Storage
Blob Data Storage
Block Data Storage
NoSQL Storage
64. Relational Data Store
An application uses a central database for
storing data elements and performs
complex queries on them
65. Blob Storage
A distributed application needs to manage large data
elements, such as virtual server images or videos, which are
too large for traditional databases.
In a distributed application data elements must be made
available to all application components and to distributed
users. Access to the data needs to be performed in a
standardized fashion and access control has to be established.
Organize the data elements in a folder hierarchy similar to a
traditional file system. Give each data element a unique
identifier that can be used to access it over a network. Also,
establish access control mechanisms.
66. Block Storage
Resources in clouds are often unreliable (low
available compute nodes).
Therefore, the data that they access locally shall in fact be stored in a highly available central data store. This way, if a server fails the data is not lost, and a new server can be started to use the secured data.
Offer data elements in a central storage that
can be accessed by distributed servers and
integrate them as local drives.
67. NoSQL Storage
Need to handle very large amounts of data
and also need to be adjusted to new user
demands flexibly.
Database solution is required that focuses on
scaling out rather than on optimizing the use
of a single resource and that can adjust
flexibly to changes of the data structure.
Use a schema-free storage solution, with
limited query capabilities to enable extreme
scale-out through easy data replication.
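The schema-free idea can be sketched with an in-memory stand-in for a document store. This is purely illustrative (the collection and keys are made up): documents in the same collection may carry different fields, and access is limited to simple key lookups, which is exactly what makes replication and scale-out easy.

```python
# Illustrative in-memory stand-in for a schema-free NoSQL store:
# documents in one collection may have different fields, and queries
# are limited to key lookups (no joins, no complex queries).
collection = {}  # key -> document (a plain dict, no fixed schema)

collection["u-1"] = {"name": "Ada", "email": "ada@example.com"}
collection["u-2"] = {"name": "Alan", "twitter": "@alan"}  # different fields

def get(key):
    # only key-based access
    return collection.get(key)

print(get("u-2")["twitter"])
```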
68. Communication Services
Message Oriented Middleware
Reliable Messaging
Exactly-Once Delivery
At-Least-Once Delivery
69. Message-oriented middleware
Different applications usually use different languages,
data formats, and technology platforms. When one
application (component) needs to exchange
information with another one, the format of the target
application has to be respected.
Sending messages directly to the target application
results in a tight coupling of sender and receiver since
format changes directly affect both implementations.
Connect applications through an intermediary, the
message oriented middleware, that hides the
complexity of addressing and availability of
communication partners as well as supports
transformation of different message formats.
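The transformation role of the intermediary can be sketched as below. The formats and function names are all illustrative: the middleware adapts the sender's format to the receiver's, so neither side has to change when the other does.

```python
# Sketch of a message-oriented-middleware intermediary: it sits between
# sender and receiver and transforms the message format in transit,
# decoupling the two implementations. All formats are illustrative.
def sender_produces():
    # sender's native format: a comma-separated string
    return "c-1,Ada,London"

def middleware_transform(raw):
    # the intermediary adapts the format so neither side changes
    cid, name, city = raw.split(",")
    return {"id": cid, "name": name, "city": city}

def receiver_consumes(message):
    # receiver only ever sees its own expected format
    return f"{message['name']} from {message['city']}"

print(receiver_consumes(middleware_transform(sender_produces())))
```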
70. Reliable Messaging
The message transfer from one communication partner to the other is performed under a transactional context. In particular, this transaction subsumes the operation performed to store the messages in persistent storage.
Thus, if an error occurs during message receiving, sending, or processing, the transaction can be compensated, transferring the overall system back to a correct and consistent state.
71. At-least-once delivery
The receiver of messages sends special acknowledgement messages to the sender. If the sender does not receive such an acknowledgement message within a given time frame, it retransmits the message. Thus, messages which are lost due to communication errors are still received eventually.
However, duplicate messages can occur, for example, if an acknowledgement message is lost.
To reduce the communication overhead, acknowledgement messages can be sent either after each individual message or after an agreed-upon number of messages.
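The retransmit-until-acknowledged loop can be sketched as follows. This is an illustrative simulation (the receiver, ack loss, and seed are all made up): a lost acknowledgement causes a retransmission, so the message arrives at least once, and possibly more than once.

```python
# Sketch of at-least-once delivery: the sender retransmits until an
# acknowledgement arrives, so a lost ack produces a duplicate message.
import random

random.seed(1)  # deterministic simulation
received = []

def receiver(message):
    received.append(message)
    # simulate the acknowledgement being lost about half the time
    return random.random() > 0.5

def send_at_least_once(message, max_attempts=10):
    for _ in range(max_attempts):
        if receiver(message):   # retransmit until acked
            return
    raise TimeoutError("no acknowledgement received")

send_at_least_once("hello")
print(len(received))  # delivered at least once, possibly more
```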
72. Exactly-once delivery
Whenever a message is created, it is associated with a unique identifier. This identifier is used by a filtering component on the message path to delete duplicates.
The filter does so by storing the identifiers of messages it has already seen. The identifiers of messages passing through the filtering component are compared to the recorded identifiers to identify and delete duplicates.
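The duplicate-filtering step can be sketched as follows. This is an illustrative sketch (identifiers and message bodies are made up): the filter records every identifier it has seen and drops any message whose identifier has already been recorded, turning at-least-once delivery into exactly-once processing.

```python
# Sketch of the duplicate filter behind exactly-once delivery: each
# message carries a unique id, and any id seen before is dropped.
seen_ids = set()
delivered = []

def filter_and_deliver(message_id, body):
    if message_id in seen_ids:      # duplicate: delete it
        return False
    seen_ids.add(message_id)        # record the identifier
    delivered.append(body)
    return True

filter_and_deliver("m-1", "create order")
filter_and_deliver("m-1", "create order")  # redelivery is filtered out
filter_and_deliver("m-2", "ship order")
print(delivered)
```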