The document discusses queuing systems and networks. It describes how queuing networks model concurrent systems as nodes that each combine a queue and a server. Key concepts include arrival and service rates, stable systems in which the arrival rate equals the departure rate, and Little's Law, which relates queue length, arrival rate, and waiting time. The document also introduces Flux, a language for building high-performance concurrent servers.
Operating Systems - Queuing Systems
1. Operating Systems
CMPSCI 377
Queuing Systems
Emery Berger
University of Massachusetts Amherst
UNIVERSITY OF MASSACHUSETTS AMHERST • Department of Computer Science
2. Queuing Systems & Servers
Queuing systems
High-level model of concurrent applications
Flux
Language for building servers
3-10. Queuing Networks
Model of tasks or services
Node includes queue (line) & server
arrival rate (λ)
waiting time
service time
11. Stable Systems
Stable queuing system:
arrival rate (λ) = departure rate
What happens if λ > departure rate? The queue grows without bound
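A minimal simulation (not from the slides) makes the instability concrete: stepping in 1-second increments, the backlog stays at zero when the arrival rate matches the departure rate, but grows linearly once arrivals exceed departures.

```python
# Sketch: queue backlog over time for given arrival/departure rates
# (jobs per second), stepped in 1-second increments.
def queue_length(arrival_rate, departure_rate, seconds):
    n = 0.0
    for _ in range(seconds):
        n = max(0.0, n + arrival_rate - departure_rate)
    return n

print(queue_length(5, 5, 100))  # stable: backlog stays at 0.0
print(queue_length(6, 5, 100))  # unstable: backlog reaches 100.0
```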
18. Networks of Queues
Can build system from connected servers
Latency = time for one task to get through
Throughput = service rate
Three servers in series: 5/sec → 5/sec → 5/sec
Throughput?
19. Networks of Queues
Can build system with numerous connected servers
Latency = time for one task to get through
Throughput = service rate
Three servers in series: 5/sec → 1/sec → 5/sec
Throughput? Lowest throughput = bottleneck (here, 1/sec)
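The answer the slide is driving at fits in one line: the end-to-end throughput of servers in series is the rate of the slowest stage. A tiny sketch, using the slide's 5/1/5 jobs-per-second pipeline:

```python
# Throughput of a pipeline of queuing servers = rate of the slowest stage.
rates = [5, 1, 5]          # jobs/sec for each server in series
throughput = min(rates)    # the bottleneck
print(throughput)          # 1
```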
20. Little’s Law
Little’s Law applies to any “black box” server
Queue length (N) = arrival rate (λ) × average waiting time (T)
N = λT
21. Applications of Little’s Law
Compute waiting time to get into restaurant, bar, etc.
If N = 20 people in front of you and
λ = departure rate = 1 / 5 min.,
how long will you wait in line?
N = λT
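Working the slide's numbers through N = λT (so T = N / λ):

```python
# Little's Law: N = λT, so T = N / λ.
N = 20          # people ahead of you
lam = 1 / 5     # departures per minute (one every 5 minutes)
T = N / lam     # expected wait in minutes
print(T)        # 100.0 -> about 1 hour 40 minutes
```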
22. Applications of Little’s Law
Required service time?
Arrival rate = one job @ 500 ms
Average queue length (N) = 10
T = ? What’s the average latency?
N = λT
23. Applications of Little’s Law
Required service time?
Arrival rate = one job @ 500 ms
Average queue length (N) = 5
T = ? What’s the average latency?
N = λT
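Reading "one job @ 500 ms" as λ = 2 jobs/sec (an assumption about the slide's intent), Little's Law gives the average latency for both queue lengths:

```python
# Little's Law: T = N / λ, with λ = 2 jobs/sec (one job every 500 ms).
lam = 1 / 0.5                            # jobs per second
latency = {N: N / lam for N in (10, 5)}  # average latency in seconds
print(latency)  # {10: 5.0, 5: 2.5}
```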
24. Motivating Example: Image Server
Client:
Requests image @ desired quality & size:
http://server/Easter-bunny/200x100/75
Server:
Stores images as RAW
If requested version not found: compresses to JPG, caches the request, sends to client
25. Problem: Concurrency
Could write sequential code, but…
More clients (latency)
Bigger server
Multicores, multiprocessors
One approach: threads
Limit reuse, risk deadlock, burden programmer
Complicate debugging
Mix program logic & concurrency control
26. The Flux Programming Language
High-performance & deadlock-free concurrent programming w/ sequential components
Flux = Components + Flow + Atomicity
Components: unmodified C, C++ (or Java)
Flow: implicitly parallel path through components
Atomicity: high-level mutual exclusion
Compiler generates:
Deadlock-free, runtime-independent server
Threads, thread pools, events, …
Path profiling
Discrete event simulator
27. Flux Outline
Intro to Flux: building a server
Components
Flows
Atomicity
Performance results
Server performance
Performance prediction (QNMs)
28. Flux Server “Main”
Source nodes originate flows
Conceptually in separate thread
Executes inside implicit infinite loop
Here: initiates flow for each image request
source Listen => Image;
Each accepted request starts an Image flow:
ReadRequest → Compress → Write → Complete
29. Flux Image Server
Basic image server requires:
HTTP parsing (http)
Socket handling (socket)
JPEG compression (libjpeg)
All UNIX-style C libraries
Abstract node = flow across nodes
Nodes are concrete or abstract
Image = ReadRequest -> Compress -> Write -> Complete;
Flow uses http, libjpeg, socket, then http again
30. Control Flow
Direct flow via user-supplied predicate types
Type test applied to output
Note: no variables – dispatch on output “type”
Here: cache frequently requested images
typedef hit TestInCache;
Handler[_,_,hit] = ;
Handler[_,_,_] = ReadInFromDisk -> Compress -> StoreInCache;
On a hit, the flow proceeds straight to Write; on a miss, it reads from disk, compresses, and stores the result in the cache
31. Supporting Concurrency
Many clients = concurrent flows
Must keep cache consistent
Atomicity constraints
Same name = mutual exclusion
Apply to nodes or whole flow (abstract node)
atomic CheckCache: {cacheLock};
atomic StoreInCache: {cacheLock};
atomic Complete: {cacheLock};
32. More Atomicity
Reader / writer constraints
Multiple readers or single writer (default)
atomic ReadList: {listAccess?};
atomic AddToList: {listAccess!};
Per-session constraints
User-supplied function ≈ hash on source
Added to flow ≈ chooses from array of locks
atomic AddHasChunk: {chunks(session)};
33. Preventing Deadlock
Naïve execution can deadlock:
atomic A: {z,y};
atomic B: {y,z};
Establish canonical lock order
Partial order
Alphabetic by name
atomic A: {y,z};
atomic B: {y,z};
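The idea can be sketched outside Flux: if every flow acquires its lock set in one canonical (here, alphabetical) order, the circular wait from {z,y} vs. {y,z} cannot arise. A Python illustration (the names and helper are mine, not Flux's runtime):

```python
import threading

locks = {name: threading.Lock() for name in ("y", "z")}

def run_atomic(lock_names, fn):
    # Acquire in canonical (alphabetical) order, release in reverse:
    # every flow grabs y before z, so none can hold z while waiting on y.
    ordered = sorted(lock_names)
    for name in ordered:
        locks[name].acquire()
    try:
        return fn()
    finally:
        for name in reversed(ordered):
            locks[name].release()

a = run_atomic({"z", "y"}, lambda: "A")  # declared {z,y}, acquires y then z
b = run_atomic({"y", "z"}, lambda: "B")
print(a, b)  # A B
```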
34. Preventing Deadlock, II
Harder with abstract nodes:
A = B;
C = D;
atomic A: {z};
atomic B: {y};
atomic C: {y,z};
Solution: elevate constraints; compute a fixed point:
atomic A: {y,z};
atomic B: {y};
atomic C: {y,z};
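The elevation step can be sketched as a small fixed-point computation (the data shapes are my own, not the Flux compiler's): each abstract node's lock set is repeatedly unioned with the lock sets of the nodes in its flow until nothing changes.

```python
# Elevate atomicity constraints: an abstract node must hold the union of
# its own locks and those of its flow's nodes; iterate to a fixed point.
flows = {"A": ["B"], "C": ["D"]}                 # abstract node -> members
constraints = {"A": {"z"}, "B": {"y"}, "C": {"y", "z"}, "D": set()}

changed = True
while changed:
    changed = False
    for node, members in flows.items():
        for m in members:
            merged = constraints[node] | constraints.get(m, set())
            if merged != constraints[node]:
                constraints[node] = merged
                changed = True

print(sorted(constraints["A"]))  # ['y', 'z']
```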
35. Handling Errors
What if requested image doesn’t exist?
Error = negative return value from component
Remember – nodes oblivious to Flux
Solution: error handlers
Go to alternate paths on error
Possible extension – can match on error paths
handle error ReadInFromDisk => FourOhFour;
36. Almost Complete Flux Image Server
source Listen => Image;
Image =
ReadRequest -> CheckCache -> Handler -> Write -> Complete;
Handler[_,_,hit] = ;
Handler[_,_,_] = ReadInFromDisk -> Compress -> StoreInCache;
atomic CheckCache: {cacheLock};
atomic StoreInCache: {cacheLock};
atomic Complete: {cacheLock};
handle error ReadInFromDisk => FourOhFour;
Concise, readable expression of server logic
No threads, etc.: simplifies programming, debugging
37. Flux Outline
Intro to Flux: building a server
Components, flow
Atomicity, deadlock avoidance
Performance results
Server performance
Performance prediction
Future work
38. Flux Results
Four servers:
Image server (+ libjpeg) [23 lines of Flux]
Multi-player online game [54]
BitTorrent (2 undergrads: 1 week!) [84]
Web server (+ PHP) [36]
Evaluation
Benchmark: variant of SPECweb99
Three different runtimes here
Thread: one per connection
Thread pool: fixed max # threads
Event-driven: helper threads for blocking calls
Compared to Capriccio [SOSP03], SEDA [SOSP01]
39. Web Server