Towards Achieving Execution Time Predictability in Web Services Middleware
1. Towards Achieving Execution Time Predictability
in Web Services Middleware
Vidura Gamini Abhaya
Supervised by
Prof. Zahir Tari and Assoc. Prof. Peter Bertok
Distributed Systems and Networking Group
School of Computer Science and IT
RMIT University
Melbourne, Australia
15th June 2012
-: Completion Seminar :-
2. Problem Overview Solutions Conclusion
Presentation Structure
1 The Problem Area
2 Research Overview
3 Solutions
4 Conclusion
3. The Problem
Evolution of the Internet
Transition from:
User-Centric → Application Centric → Fully Automated Web
* Facilitated by the use of Web services.
4. The Problem

Web Services Middleware
The container applications that services are hosted in; the middleware manages all aspects of their execution.

Characteristics
Optimised for throughput by design
Requests are accepted unconditionally
Thread-pools are used to execute requests in a best-effort manner
Multiple requests are processed in parallel using processor sharing

* These characteristics result in inconsistent execution times
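The impact of processor sharing on observed execution times can be sketched numerically (an illustrative Python sketch, not code from the talk; the function name is ours):

```python
def ps_completion_time(n_requests: int, service_time_s: float) -> float:
    """Under ideal processor sharing, n equal requests that arrive together
    each receive 1/n of the CPU, so every one of them completes only after
    n * service_time_s seconds."""
    return n_requests * service_time_s

# The same 1-second service takes 1s alone, but 30s when it shares
# the processor with 29 other identical requests.
print(ps_completion_time(1, 1.0))   # 1.0
print(ps_completion_time(30, 1.0))  # 30.0
```

The observed execution time of any single request therefore depends on the concurrency level at the moment it runs, which is exactly the inconsistency the talk highlights.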
5. The Problem

[Figure: average execution time (ms) against the number of requests executing in parallel; the y-axis runs from 0 to 30,000 ms and the x-axis up to roughly 40 requests]

The service has a linear execution time complexity. The experiment started with 2 requests sharing the processor and increased the count in steps of five, up to 47 requests sharing the processor.
6. The Problem

Why is predictability of execution important in web services?
To achieve consistent service execution times
To make service selections based on guaranteed execution times
To prevent clients from being set up for failure
To open up application areas that require predictability of execution, such as industrial control systems, avionics, robotics and medical diagnostic systems, to the use of web services as a communications platform
7. Research Overview
What is predictability of execution?
Execution of a web service completing within a given deadline in a
consistent and repeatable manner.
Aim
Achieve predictability of service execution in stand-alone and
cluster-based web services middleware
Scope of Research
Use real-time scheduling principles.
Achieving Predictability is limited to the execution within the web services
middleware.
We assume no delays are experienced on the network.
8. Research Questions
1 How can predictability of execution be achieved in stand-alone
web services middleware?
2 How can predictability of execution be achieved in
cluster-based web service deployments?
3 How can web services middleware be engineered to have
predictable execution times?
4 How can performance models for such systems be derived and
compared against other techniques?
9. Related Work

Related work can be broadly categorised into the following:

Admission control mechanisms used to control execution time in web services
[Dyachuk et al., 2007], [Elnikety et al., 2004], [Carlstrom and Rom, 2002], [Erradi and Maheshwari, 2005]
These do not consider any predictability attributes, such as a deadline or laxity, in their decisions.

Execution time QoS on stand-alone servers
[Sharma et al., 2003], [Ching-Ming Tien, 2005]
These achieve some level of differentiated processing, but do not consider any predictability attributes.

Request dispatching techniques aimed at improving execution time
[Pacifici et al., 2005], [García et al., 2009], [Gmach et al., 2008], [Cao et al., 2010]
These achieve execution times defined in SLAs only in a probabilistic manner; execution times can be inconsistent.

Web services middleware using real-time scheduling
[Helander and Sigurdsson, 2005], [Mathes et al., 2009]
Predictability is achieved in closed environments; task properties are known at design time of the system.

Performance models for systems using deadline based scheduling
[Li et al., 2007], [Kargahi and Movaghar, 2006], [Chen and Decreusefond, 1996]
These consider M/M/1 systems, which assume exponentially distributed service times; this is not a good representation of web service workloads. Only non-preemptive systems are considered.
10. Question 1
How can predictability of execution be achieved in stand-alone web
services middleware?
11. Real-time Scheduling

[Timeline figure: a task's arrival time, waiting time, start time, execution, end time and deadline laid out on a time axis]

Laxity
The ability to delay the execution of a task without compromising its deadline.

Laxity = Deadline − Arrival Time − Exec. Time Requirement

Can also be defined as: Laxity = Exec. Time Requirement / (Deadline − Arrival Time)
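The first definition translates directly into code (a trivial illustrative sketch; the example values are hypothetical):

```python
def laxity(deadline: float, arrival_time: float, exec_time_requirement: float) -> float:
    """Slack available before the task must start in order to still meet its deadline."""
    return deadline - arrival_time - exec_time_requirement

# A task arriving at t=0 with deadline 25 and a 3-unit execution
# requirement can be delayed by up to 22 time units.
print(laxity(25, 0, 3))  # 22
```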
12. Predictability of Execution in Stand-alone Middleware

Introduction of an execution deadline
Laxity based admission control
Earliest Deadline First (EDF) scheduling

[Timeline figure: tasks T1-T5 laid out on a 0-25 time axis, each with its arrival time, start time, execution, end time and deadline]
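EDF ordering itself is simple to sketch: among the ready tasks, the scheduler always picks the one with the earliest deadline next. An illustrative Python sketch (the task ids and deadlines are made up, and all tasks are assumed ready at time 0):

```python
import heapq

def edf_order(tasks):
    """Return task ids in the order an EDF scheduler would start them.
    tasks: iterable of (task_id, absolute_deadline), all ready at time 0."""
    heap = [(deadline, tid) for tid, deadline in tasks]
    heapq.heapify(heap)  # min-heap keyed on deadline
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(edf_order([("T1", 25), ("T2", 19), ("T3", 9)]))  # ['T3', 'T2', 'T1']
```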
13. Predictability of Execution in Stand-alone Middleware
Laxity based admission control - Components of analytical model
Remaining Execution Time
Running Laxity - Laxity of a task at a given time
Processor Demand - Units of processing time required (within
a time period)
Loading Factor - Ratio between processor demand and
processing resources available (within a time period)
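The last two components can be sketched as simple ratios (illustrative Python; the numbers match the worked example that follows, where 6 units of demand fall in a 7-unit window):

```python
def processor_demand(remaining_execs):
    """Units of processing time demanded by the requests due within a window."""
    return sum(remaining_execs)

def loading_factor(demand, window_length):
    """Ratio between processor demand and the processing time available
    in the window; a value <= 1 means the demand is feasible."""
    return demand / window_length

# 6 units of demand in a 7-unit window is feasible (0.86 <= 1);
# 6 units in a 5-unit window is not (1.2 > 1).
print(round(loading_factor(processor_demand([2, 4]), 7), 2))  # 0.86
print(loading_factor(6, 5))                                   # 1.2
```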
14. Predictability of Execution in Stand-alone Middleware

[Timeline figure: tasks T1-T4 on a 0-25 time axis, each with an arrival time (a1-a4) and deadline (d1-d4); the reference point is the arrival of T4]

Laxity based admission control - Components of Analytical Model

Examples:
Remaining Exec. Time (R):
R3 = 3 − 1 = 2 (Remaining Exec. Time of T3)
R2 = 6 − 2 = 4

Running Laxity (L):
L2 = (d2 − a2) − 2 = 17 (Running Laxity of T2)
L1 = (d1 − a1) − 1 = 24
16. Laxity based admission control

The admission control algorithm, on the arrival of a request:

1. Get the accepted requests finishing within the lifespan of the new request, and calculate the loading factor within that lifespan. If the new task cannot be scheduled to meet its deadline, the request is rejected.
2. Otherwise, repeated separately for each accepted request finishing after the new request: get those requests and calculate the loading factor between the arrival time of the new request and the deadline of the old request. If the new task would compromise the deadlines of any of these requests, the request is rejected.
3. If both checks pass, the request is accepted.

[The flowchart is illustrated against the T1-T4 timeline example.]
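The two checks can be sketched end-to-end in illustrative Python (our own sketch, not the thesis implementation; requests are modelled as (arrival, deadline, remaining_exec) tuples and the example values are hypothetical):

```python
def loading_factor(requests, window_start, window_end):
    """Demand of the requests due by window_end, over the time available."""
    demand = sum(r[2] for r in requests if r[1] <= window_end)
    return demand / (window_end - window_start)

def admit(new, accepted):
    """Laxity-based admission control sketch: accept a request only if it can
    meet its own deadline and does not compromise already-accepted requests."""
    # Check 1: loading factor within the lifespan of the new request.
    in_lifespan = [r for r in accepted if r[1] <= new[1]]
    if loading_factor(in_lifespan + [new], new[0], new[1]) > 1:
        return False
    # Check 2: repeated separately for each accepted request finishing later.
    for old in (r for r in accepted if r[1] > new[1]):
        if loading_factor(accepted + [new], new[0], old[1]) > 1:
            return False
    return True

accepted = [(0, 10, 4)]            # one already-accepted request
print(admit((2, 8, 3), accepted))  # True: both checks pass
print(admit((2, 6, 5), accepted))  # False: cannot meet its own deadline
```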
17. Laxity based admission control

Worked example, at the arrival of T4 (check 1):
Proc. Demand within the lifespan of T4 = 6
Loading Factor within the lifespan of T4 = 0.86 ≤ 1, so the check passes.
18. Laxity based admission control

Worked example, at the arrival of T4 (check 2, against T2):
Proc. Demand between the arrival of T4 and the deadline of T2 = 10
Loading Factor between the arrival of T4 and the deadline of T2 = 0.62 ≤ 1, so the check passes.
19. Laxity based admission control

Worked example, at the arrival of T4 (check 2, against T1):
Proc. Demand between the arrival of T4 and the deadline of T1 = 14
Loading Factor between the arrival of T4 and the deadline of T1 = 0.82 ≤ 1, so the check passes.
20. Laxity based admission control

All checks pass, so T4 is accepted and placed on the schedule.
21. Laxity based admission control

Worked example, at the arrival of T5:
Proc. Demand within the lifespan of T5 = 6
Loading Factor within the lifespan of T5 = 6 / (12 − 7) = 1.2 > 1, so T5 is rejected.
22. Stand-alone middleware performance

RT-Axis2 accepts between 18.1% of requests (at the fastest arrival rate) and 96.7% (at the slowest).

Inter-arrival times (sec) | Unmod. Axis2 % Acc. | Unmod. % D. Met of Acc. | RT-Axis2 % Acc. | RT-Axis2 % D. Met of Acc.
1.125 (Low)               | 100                 | 36.2                    | 96.7            | 100
0.75                      | 62.4                | 18.3                    | 58.6            | 100
0.3                       | 55.1                | 9.1                     | 30.7            | 99.7
0.175 (High)              | 28.7                | 8.8                     | 18.1            | 96.7
24. Stand-alone middleware performance

[Figure: comparison of resultant laxities, RT-Axis2 (RT) vs Axis2 (A), stand-alone setup, for inter-arrival times of 1.125s, 0.625s, 0.3s and 0.175s]
[Figure: comparison of throughput (tasks per second) of unmodified Axis2, RT-Axis2, and RT-Axis2 excluding rejected requests, across the mean inter-arrival times]

Mean inter-arrival time | Unmod. Axis2 Throughput (sec^-1) | RT-Axis2 Throughput (sec^-1) | RT-Axis2 Throughput (excl. rejected)
1.125s (Low)            | 0.98                             | 0.91                         | 0.88
0.625s                  | 0.83                             | 1.62                         | 0.95
0.300s                  | 0.72                             | 3.40                         | 1.04
0.175s (High)           | 0.69                             | 5.64                         | 1.02
25. Question 2
How can predictability of execution be achieved in cluster-based
web service deployments?
26. Predictability in Cluster-based Middleware

Dispatching Algorithms - Objectives
* Ensure the deadline requirements of tasks can be met
* Distribute requests evenly among executors, based on a condition
* Avoid executors going into overload conditions

4 algorithms are used:
RT-RoundRobin
RT-Sequential
RT-ClassBased
RT-LaxityBased
27. RT-RoundRobin

Details
Request assignment cycles through all executors in round-robin fashion
The schedulability check is done only once per request
If the check fails on the selected server, the request is rejected

Highlights
A proof-of-concept for how a simple dispatching algorithm can be made real-time ready
Processing overhead is minimal
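A minimal sketch of the algorithm (illustrative Python, not the thesis implementation; the schedulability check is abstracted as a callback and the toy capacity rule is ours):

```python
from itertools import cycle

def rt_round_robin(requests, executors, schedulable):
    """Cycle through executors; a single schedulability check per request,
    and a failed check on the selected executor means rejection."""
    ring = cycle(range(len(executors)))
    assignment = {}
    for req in requests:
        ex = next(ring)
        if schedulable(executors[ex], req):
            executors[ex].append(req)
            assignment[req] = ex
        else:
            assignment[req] = None  # rejected: no other executor is tried
    return assignment

# Toy check: each executor can hold a single request.
executors = [[], []]
fits = lambda ex, req: len(ex) < 1
print(rt_round_robin(["a", "b", "c"], executors, fits))
# {'a': 0, 'b': 1, 'c': None}
```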
28. RT-Sequential

Details
Requests are assigned in a sequential manner
Requests are sent to one executor until the schedulability check fails, then assigned to the second, and so on
The schedulability check happens against multiple executors until a request can be assigned

Highlights
If it is possible to schedule a job within the cluster, it is guaranteed to be scheduled
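In sketch form (illustrative Python; the schedulability check is abstracted as a callback and the toy capacity rule is ours):

```python
def rt_sequential(req, executors, schedulable):
    """Fill executors in order: try each one until the schedulability
    check passes; reject only when every executor fails the check."""
    for i, ex in enumerate(executors):
        if schedulable(ex, req):
            ex.append(req)
            return i          # index of the executor the request went to
    return None               # unschedulable anywhere in the cluster

# Toy check: each executor can hold two requests.
executors = [["r1", "r2"], []]
fits = lambda ex, req: len(ex) < 2
print(rt_sequential("r3", executors, fits))  # 1 (the first executor is full)
```

Because every executor is consulted before rejecting, a request is turned away only when no executor in the cluster can schedule it, which is the guarantee the slide states.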
29. RT-ClassBased
Details
Requests are divided into classes
based on a priority scheme
Mapping of requests to executors
is pre-determined and defined
offline
Sched. check is only considered
with the assigned executor
Reference implementation uses
priorities based on task sizes
Highlights
Results in the reduction of variance of task sizes at each executor
Pre-defined mapping of request sizes to executors could be done using
pre-profiled execution times or execution time history
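The pre-defined mapping can be sketched as a size-class lookup (illustrative Python; the class bounds and executor assignment are hypothetical examples of such an offline mapping):

```python
import bisect

def rt_class_based(task_size, class_upper_bounds):
    """Map a request to the executor pre-assigned to its task-size class.
    class_upper_bounds must be sorted; executor i serves sizes up to
    bound i, and the last executor serves everything larger."""
    return bisect.bisect_left(class_upper_bounds, task_size)

# Three executors: small (<= 1000), medium (<= 100000), large (the rest).
bounds = [1000, 100000]
print([rt_class_based(s, bounds) for s in (500, 50000, 5000000)])  # [0, 1, 2]
```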
30. RT-LaxityBased

Details
Laxity = the ability to delay a request whilst still meeting its deadline
The higher the laxity, the more requests an executor can service
The schedulability check is done only against the assigned executor

Highlights
The distribution of requests results in a broad range of laxities at each executor
Keeps track of the last two laxity assignments and prevents consecutive assignments to the same executor
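One way to sketch the idea (our interpretation in illustrative Python; the laxity oracle and the "no third consecutive pick of one executor" rule are simplified assumptions, not the thesis algorithm):

```python
def rt_laxity_based(req, num_executors, laxity_of, history):
    """Prefer the executor offering the most laxity for this request, but
    skip an executor that already received the last two assignments; reject
    when no executor offers non-negative laxity."""
    order = sorted(range(num_executors), key=lambda i: laxity_of(i, req), reverse=True)
    for i in order:
        if len(history) >= 2 and history[-1] == history[-2] == i:
            continue                    # avoid a third consecutive pick
        if laxity_of(i, req) >= 0:      # schedulable on this executor
            history.append(i)
            return i
    return None

laxities = [5.0, 3.0]                   # executor 0 currently has the most slack
oracle = lambda i, req: laxities[i]
history = []
print([rt_laxity_based(r, 2, oracle, history) for r in ("a", "b", "c")])
# [0, 0, 1] - the third request is diverted to executor 1
```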
31. RT-RoundRobin vs Round-Robin
[Figure: task-size distribution between 3 executors under round-robin dispatching; execution time (ms) vs task size, with task sizes uniform over 1-5,000,000 and service times uniform over 0.1s-0.5s; every executor receives the full range of task sizes]
RT-RR accepts between 20.5% (2 executors, fastest arrival rate) and 99.9% (4 executors, slowest arrival rate) of the requests.
32. RT-ClassBased vs Class Based
[Figure: CPU utilisation (%) at each of 3 executors over ~400 samples under class-based dispatching; task sizes uniform over 1-5,000,000, inter-arrival times uniform over 0.25s-1s]
RT-CB accepts between 28.6% (2 executors, fastest arrival rate) and 100% (4 executors, slowest arrival rate) of the requests.
33. RT-Sequential and RT-Laxity

The highest acceptance rate of any algorithm with 2 executors at the highest arrival rate is 38.5%, achieved by RT-LaxityBased.

RT-Sequential records the lowest percentage of deadlines met of all the algorithms. However, because it has the best acceptance rate of all, its average number of requests meeting their deadlines is second only to RT-LaxityBased.
34. Question 3
How can web services middleware be engineered to have
predictable execution times?
35. Infrastructure

Development Platform and OS
Solaris 10 08/05 is used as the real-time OS
Sun's Real-Time Specification for Java (RTSJ) implementation is used as the development platform
36. Introduction of the Deadline

An example of how the processing deadline could be conveyed to the middleware, using SOAP headers:

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header>
    <ns1:RealTimeParams xmlns:ns1="http://endpoint.testservice">
      <ns1:Deadline>70</ns1:Deadline>
      <ns1:Period>0</ns1:Period>
      <ns1:clientid>Client1</ns1:clientid>
      <ns1:ExecTime>28</ns1:ExecTime>
    </ns1:RealTimeParams>
  </soapenv:Header>
  <soapenv:Body>
    <ns1:primeCount xmlns:ns1="http://endpoint.testservice">
      <ns1:primeLimt>102155</ns1:primeLimt>
    </ns1:primeCount>
  </soapenv:Body>
</soapenv:Envelope>

The real-time parameters (deadline, period, client id and estimated execution time) travel in the SOAP header; the payload itself is unchanged.
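On the middleware side, these header values have to be pulled out before any scheduling decision can be made. A sketch of that extraction using Python's standard XML library (illustrative only; RT-Axis2 itself is Java, and the dictionary shape here is our own):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
RT_NS = "http://endpoint.testservice"

def read_rt_params(envelope_xml: str) -> dict:
    """Extract the real-time parameters carried in the SOAP header."""
    root = ET.fromstring(envelope_xml)
    params = root.find(f"{{{SOAP_NS}}}Header/{{{RT_NS}}}RealTimeParams")
    return {
        "deadline": int(params.findtext(f"{{{RT_NS}}}Deadline")),
        "exec_time": int(params.findtext(f"{{{RT_NS}}}ExecTime")),
        "client_id": params.findtext(f"{{{RT_NS}}}clientid"),
    }
```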
37. Priority Model

Three priority levels are introduced (used by the real-time scheduler):
High - cannot be interrupted by the GC; used on a thread to allow execution
Mid - can be interrupted by the GC; used on a thread to prevent execution
Low - can be interrupted by the GC, and is the highest priority level available in standard Java; used for metadata requests, i.e. WSDL
38. Enhancements made to Axis2

Details
Thread-pools have been replaced with RT-Thread pools
A real-time scheduler has been introduced
Multiple execution lanes are used
39. Enhancements made to Synapse

Details
An enhanced version of Synapse is used for dispatching
Thread-pools have been replaced with RT-Thread pools
A real-time scheduler has been introduced
Multiple execution lanes are used
RT-Axis2 instances are used as executors
Request processing happens at the dispatcher; service invocation happens at the executors
40. Minimising Priority Inversions

Priority inversions can be caused by I/O activities such as reading/writing files and sockets.
To prevent priority inversions:
Avoid output that results in on-screen messages or log-file writes
Use in-memory logging and delayed writes
Use offline debugging techniques/tools, e.g. Oracle Thread Scheduling Visualizer (TSV)
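In-memory logging with delayed writes can be sketched like this (illustrative Python; the real middleware is Java, but the idea is the same: no I/O on the time-critical request path):

```python
import io
from collections import deque

class InMemoryLog:
    """Bounded in-memory log: log() is a cheap append with no I/O, so a
    high-priority thread never blocks on a file write; flush() performs
    the slow write later, off the time-critical path."""

    def __init__(self, capacity: int = 10000):
        self._buf = deque(maxlen=capacity)   # oldest entries are dropped

    def log(self, message: str) -> None:
        self._buf.append(message)            # no I/O here

    def flush(self, stream) -> None:
        while self._buf:
            stream.write(self._buf.popleft() + "\n")

log = InMemoryLog(capacity=2)
for msg in ("req accepted", "req scheduled", "req completed"):
    log.log(msg)
out = io.StringIO()
log.flush(out)
print(out.getvalue())  # only the two most recent entries survive
```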
41. Question 4
How can performance models for such systems be derived and
compared against other techniques?
42. Performance model

Why model the system?
To derive performance attributes that are not based on deadlines
To better compare the performance of the techniques used against others that do not share the same performance attributes
To investigate the behaviour of the system analytically, without having to turn to implementations
43. System characteristics

Uses EDF scheduling
Arbitrary number of priority classes
Executions are preemptive-resume (work-conserving)
Waiting time is considered the primary performance indicator
Requests have arbitrary service times
Requests arrive according to a Poisson process

* The system is modelled as a preemptive-resume M/G/1/./EDF queue.
44. System parameters

N - the number of priority classes
Each class is assigned a constant deadline offset, i.e. a task from stream i arriving at the system at time t has the deadline t + di
The priority of a stream is decided by its deadline offset: stream i is of higher priority than stream j if i < j, which implies di ≤ dj
The difference between deadline offsets is denoted Dj,i = dj − di
The queueing discipline among priority classes is EDF; within a class it is FCFS
Each priority class has its own arrival rate, denoted λi
The resultant load of each priority class is denoted ρi

* The work of Chen K. and Decreusefond L. approximates the waiting time for a non-preemptive EDF system. Our model extends their work to a preemptive-resume system.
45. Performance Model

Wi = W0(i) + Σ_{k=1..i} ρk·Wk + Σ_{k=i+1..N} ρk·max(0, Wk − Dk,i) + Σ_{k=1..i−1} ρk·min(Wi, Di,k)

Parameters, from the point of view of a newly arriving request (the tagged request):
Wi - mean waiting time experienced by stream i requests
W0(i) - mean residual service time as experienced by stream i requests
Σ_{k=1..i} ρk·Wk - requests from higher (or equal) priority classes found in the system by the newly arrived task and served prior to it
Σ_{k=i+1..N} ρk·max(0, Wk − Dk,i) - requests from lower priority classes found in the system by the newly arrived task and served prior to it
Σ_{k=1..i−1} ρk·min(Wi, Di,k) - requests from higher priority classes arriving at the system after the tagged task and serviced prior to it

[Timeline figure: request I arrives at t1 with deadline t1 + d1; request J (newly arrived) arrives at t2 with deadline t2 + d2; the offset difference is (d2 − d1)]
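Because each Wi appears on both sides, the system of equations can be solved numerically by fixed-point iteration. An illustrative Python sketch (the inputs W0, rho and D are assumed given; classes are indexed 0..N−1 with class 0 having the smallest deadline offset, and convergence is assumed rather than proven here):

```python
def solve_waiting_times(W0, rho, D, iters=500):
    """Fixed-point iteration for the mean waiting times W_i of the
    preemptive-resume M/G/1/./EDF model:
      W_i = W0_i + sum_{k<=i} rho_k*W_k
                 + sum_{k>i}  rho_k*max(0, W_k - D[k][i])
                 + sum_{k<i}  rho_k*min(W_i, D[i][k])
    where D[j][i] = d_j - d_i (difference of deadline offsets)."""
    N = len(rho)
    W = list(W0)
    for _ in range(iters):
        W = [W0[i]
             + sum(rho[k] * W[k] for k in range(i + 1))
             + sum(rho[k] * max(0.0, W[k] - D[k][i]) for k in range(i + 1, N))
             + sum(rho[k] * min(W[i], D[i][k]) for k in range(i))
             for i in range(N)]
    return W

# Sanity check with a single class: the recursion degenerates to
# W = W0 + rho*W, i.e. W = W0 / (1 - rho); for W0=1, rho=0.5 that is 2.
print(round(solve_waiting_times([1.0], [0.5], [[0.0]])[0], 6))  # 2.0
```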
47. Mean residual service time

For a preemptive M/G/1 queue (not using EDF):

Ci = X̄i + Σ_{j=1..i−1} (λj · Ci · X̄j)

Parameters:
Ci - mean time required to complete service for a stream i request, including the time it is preempted
X̄i - mean of the service time distribution for stream i requests
48. Mean residual service time

Ci = X̄i + Σ_{j=1..i−1} ρj · min(Di,j, Ci)

Let j* range over the subscripts j = 1, 2, .., i−1 such that Di,j ≤ Ci
Let j′ range over the subscripts j = 1, 2, .., i−1 such that Di,j > Ci

Ci = X̄i + Σ_{j*} (ρ_{j*} · Di,j*) + Σ_{j′} (ρ_{j′} · Ci)

Ci = (X̄i + Σ_{j*} ρ_{j*} · Di,j*) / (1 − Σ_{j′} ρ_{j′})
49. Mean residual service time

Let Pi be the probability of a request from stream i being in service at an arrival. Pi can be defined as:

Pi = λi · Ci

The mean residual service time of the system is the sum, over all classes, of the probability that a job of that class is in service, times the mean residual service time for that class [Kleinrock, 1976]. Therefore, we can define W0(i) as:

W0(i) = Σ_{k=1..i} Pk · Rk

Replacing Pk we get:

W0(i) = Σ_{k=1..i} Rk · (ρk + Σ_{j*} (ρ_{j*} · Dk,j*) · λk) / (1 − Σ_{j′} ρ_{j′})

where Ri = X̄i⁽²⁾ / (2·X̄i) for an M/G/1 system (X̄i⁽²⁾ is the second moment of the service time distribution for class i requests).
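Numerically, Ci and W0(i) can be computed with a small fixed-point loop (an illustrative Python sketch; the symbols follow the slides, the input values below are hypothetical, and convergence of the iteration is assumed):

```python
def completion_time(i, Xbar, rho, D, iters=200):
    """Fixed point of C_i = Xbar_i + sum_{j<i} rho_j * min(D[i][j], C_i):
    the mean time a class-i request spends in service, preemptions included."""
    C = Xbar[i]
    for _ in range(iters):
        C = Xbar[i] + sum(rho[j] * min(D[i][j], C) for j in range(i))
    return C

def residual_service_time(i, lam, Xbar, X2, rho, D):
    """W0(i) = sum_{k<=i} P_k * R_k, with P_k = lam_k * C_k and
    R_k = X2_k / (2 * Xbar_k), X2_k being the second moment."""
    return sum(lam[k] * completion_time(k, Xbar, rho, D) * (X2[k] / (2 * Xbar[k]))
               for k in range(i + 1))

# Single class, exponential service with mean 1 (second moment 2), lam = 0.5:
# C_0 = 1 and W0(0) = 0.5 * 1 * (2 / 2) = 0.5.
print(residual_service_time(0, [0.5], [1.0], [2.0], [0.5], [[0.0]]))  # 0.5
```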
54. Summary of Contributions
1 How can predictability of execution be achieved in stand-alone web services
middleware?
Mathematical model and algorithm for run-time laxity based admission control.
Introduction of deadline based scheduling.
2 How can predictability of execution be achieved in cluster-based web service
deployments?
Four request dispatching algorithms based on the laxity property.
3 How can web services middleware be engineered to have predictable execution
times?
Software engineering techniques, algorithms, designs and tools for incorporating
predictability into web services middleware.
4 How can performance models for such systems be derived and compared against
other techniques?
Queueing-theory-based performance model for a preemptive, work-conserving M/G/1/./EDF queue.
55. Thesis Outcomes
V. Gamini Abhaya, Z. Tari, and P. Bertok. Achieving Predictability and Service Differentiation in Web
Services. In ICSOC-ServiceWave 09: Proceedings of the 7th International Conference on Service-Oriented
Computing, Stockholm, Sweden, November 24-27, 2009, pages 364-372. Springer, 2009.
V. Gamini Abhaya, Z. Tari, and P. Bertok. Using Real-Time Scheduling Principles in Web Service Clusters
to Achieve Predictability of Service Execution. In Service Oriented Computing: 8th International
Conference, ICSOC 2010, San Francisco, CA, USA, December 7-10, 2010. Proceedings, pages 197-212.
Springer, 2010.
V. Gamini Abhaya, Z. Tari, and P. Bertok. Building web services middleware with predictable service
execution. In Web Information Systems Engineering - WISE 2010: 11th International Conference, Hong
Kong, China, December 12-14, 2010, Proceedings, pages 23-37. Springer-Verlag New York Inc.
(* Won best student paper award)
V. Gamini Abhaya, Z. Tari, and P. Bertok. Building web services middleware with predictable execution
times. World Wide Web Journal, pages 1-60, 28 Mar 2012.
V. Gamini Abhaya, Z. Tari, P. Bertok, and P. Zeephongsekul. Waiting time analysis for multi-class preemptive-resume M/G/1/./EDF queues. Journal of Parallel and Distributed Computing. (In preparation)
56. Future Work

Predictability in the network layer
Extending predictability across application boundaries
Reducing request rejections through selective re-transmission techniques
Improvements to the preemptive M/G/1/./EDF model
Performance models for a preemptive G/G/1/./EDF queue
57. Acknowledgements
Prof. Zahir Tari and Assoc. Prof. Peter Bertok
Prof. Panlop Zeephongsekul
Miss. Dora Drakopoulos and Mrs. Beti Stojkovski
School of Computer Science and IT and its admin staff
Mr. Don Gingrich
DSN staff and students
My Family
58. Thank You! Questions or Comments?