This document discusses a policy-based architecture for quality of service (QoS) management. It describes different types of policies needed to specify QoS requirements and actions when requirements are not met. Expectation policies are used to state QoS requirements and monitoring actions. The architecture includes sensors to monitor application attributes, coordinators to retrieve policies and handle events, and policy decision points that make decisions based on events. When requirements are violated, actions may include notifying event managers or adjusting resources. The goal is to support dynamic and soft QoS requirements through policy-driven management.
SLA-DRIVEN LOAD SCHEDULING IN MULTI-TIER CLOUD COMPUTING: FINANCIAL IMPACT CO... (ijccsa)
A cloud service provider strives to deliver a high Quality of Service (QoS) to client jobs. Such jobs vary in their computational and Service-Level-Agreement (SLA) obligations, and differ in how far they tolerate delays and SLA violations. Job scheduling plays a critical role in servicing cloud demands by allocating appropriate resources to execute client jobs, and the cloud service provider optimizes the response to such jobs in a multi-tier cloud computing environment. Typically, the complex and dynamic nature of multi-tier environments makes such demands hard to meet, because the tiers depend on each other, so a bottleneck in one tier shifts to and escalates in subsequent tiers. However, the optimization process of existing approaches produces single-tier-driven schedules that do not exploit the differential impact of SLA violations on executing client jobs. Furthermore, the impact of schedules optimized at the tier level on the performance of schedules formulated in subsequent tiers tends to be ignored, resulting in less than optimal performance when measured at the multi-tier level. Failing to meet job obligations incurs SLA penalties that often take the form of financial compensation, or of losing the future interest of unsatisfied clients in the service provided. Mitigating the risk of such delays to the operational performance of a cloud service provider is therefore vital to meeting SLA expectations and avoiding the associated commercial penalties. Such situations demand that the cloud service provider employ scalable service mechanisms that manage the execution of resource loads according to their financial influence on system performance, so as to ensure system reliability and cost reduction. In this paper, a scheduling and allocation approach is proposed that formulates schedules accounting for the differential impact of SLA violation penalties and thus produces schedules that are optimal in financial performance. A queue virtualization scheme is designed to facilitate the formulation of optimal schedules at the tier and multi-tier levels of the cloud environment. Because the scheduling problem is NP-hard, a biologically inspired approach is proposed to mitigate the complexity of finding optimal schedules. The reported results demonstrate the efficacy of the proposed approach in formulating cost-optimal schedules that reduce the SLA penalties of jobs at various architectural granularities of the multi-tier cloud environment.
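As a rough illustration of penalty-aware ordering (not the paper's queue-virtualization scheme or its biologically inspired search), the sketch below applies Smith's rule on a single tier: sorting jobs by penalty rate per unit of processing time minimizes the total penalty-weighted completion time under a linear delay-penalty model. The class, field names, and the linear penalty model are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    runtime: float        # processing time required on the tier
    penalty_rate: float   # SLA penalty accrued per unit of completion time

def schedule_by_penalty(jobs):
    """Order jobs by Smith's rule (penalty_rate / runtime, descending),
    which minimizes total penalty-weighted completion time on one tier."""
    return sorted(jobs, key=lambda j: j.penalty_rate / j.runtime, reverse=True)

def total_weighted_penalty(order):
    """Total SLA penalty if each job's penalty grows linearly with its
    completion time in the given execution order."""
    t = total = 0.0
    for job in order:
        t += job.runtime
        total += job.penalty_rate * t
    return total
```

Under this toy model, a short job with a steep penalty rate jumps ahead of a long cheap one, which is the single-tier intuition the multi-tier approach generalizes.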
Service Request Scheduling based on Quantification Principle using Conjoint A... (IJECEIAES)
Service request scheduling has a major impact on the performance of the service-processing design in a large-scale distributed computing environment such as a cloud system. It is desirable to have a service request scheduling principle that distributes the workload evenly among the servers according to their capacities. Server capacities are termed high or low only relative to one another, so the server capacity needs to be quantified to overcome this subjective assessment; a method to split and distribute the service requests based on this quantified capacity is also needed. This paper addresses these requirements by devising a service request scheduling principle for a heterogeneous distributed system using appropriate statistical methods, namely conjoint analysis and the Z-score. Experiments were conducted, and the results show considerable improvement in the performance of the designed service request scheduling principle compared with several existing principles. Areas of further improvement are also identified and presented.
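The Z-score step of this idea can be sketched as follows. The conjoint-analysis stage that would produce the raw capacity scores is not reproduced, and the function names and the proportional-split rule are illustrative assumptions, not the paper's exact principle.

```python
import statistics

def capacity_zscores(capacities):
    """Turn raw server-capacity scores into Z-scores, so that 'high' and
    'low' capacity become quantitative rather than subjective labels."""
    mean = statistics.mean(capacities)
    sd = statistics.pstdev(capacities)
    return [(c - mean) / sd for c in capacities]

def split_requests(n_requests, capacities):
    """Split a batch of service requests among servers in proportion to
    their quantified capacities; the last server absorbs rounding drift."""
    total = sum(capacities)
    shares = [round(n_requests * c / total) for c in capacities]
    shares[-1] += n_requests - sum(shares)
    return shares
```

For example, four servers with quantified capacities 10, 20, 30, and 40 would receive 10, 20, 30, and 40 of a batch of 100 requests.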
Score based deadline constrained workflow scheduling algorithm for cloud systems (ijccsa)
Cloud computing is a rapidly emerging trend in the information technology domain. It offers utility-based IT services to users over the Internet. Workflow scheduling is one of the major problems in cloud systems: a good scheduling algorithm must minimize the execution time and cost of a workflow application while meeting the QoS requirements of the user. In this paper we take the deadline as the major constraint and propose a score-based deadline-constrained workflow scheduling algorithm that executes a workflow at manageable cost while meeting a user-defined deadline. The algorithm uses the concept of a score that represents the capabilities of hardware resources; this score value is used when allocating resources to the various tasks of the workflow application. The algorithm allocates to the workflow application those resources that are reliable, reduce the execution cost, and complete the application within the user-specified deadline. The experimental results show that the score-based algorithm exhibits lower execution time and also reduces the failure rate of workflow applications at manageable cost. All simulations were done using the CloudSim toolkit.
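The abstract does not give the paper's actual scoring formula, so the fragment below is only a hypothetical sketch of the idea: derive a score from a resource's hardware capabilities and reliability, then allocate each task to the cheapest resource that can still meet the deadline, breaking cost ties by the higher score. All field names and the formula are invented for illustration.

```python
def resource_score(mips, ram_gb, reliability):
    # Hypothetical capability score; the paper's exact formula is not given.
    return mips * reliability + 100 * ram_gb

def allocate(task_length, deadline, resources):
    """Pick the cheapest resource that can finish task_length (instructions)
    before the deadline; break cost ties by preferring a higher score."""
    feasible = [r for r in resources if task_length / r["mips"] <= deadline]
    if not feasible:
        return None   # no resource can meet the deadline
    return min(feasible,
               key=lambda r: (r["cost"],
                              -resource_score(r["mips"], r["ram_gb"],
                                              r["reliability"])))
```

Returning `None` when no resource is feasible mirrors the deadline acting as a hard constraint rather than an optimization target.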
AN ADAPTIVE APPROACH FOR DYNAMIC RECOVERY DECISIONS IN WEB SERVICE COMPOSITIO... (ijwscjournal)
Service-Oriented Architecture facilitates the automatic execution and composition of web services in a distributed environment. Service composition in a heterogeneous environment may suffer from various kinds of service failures; these failures interrupt the execution of composite web services and can lead to complete system failure. Dynamic recovery decisions for failed services depend on the non-functional attributes of the services. In recent years, various methodologies have been presented that provide recovery decisions based on time-related QoS (Quality of Service) factors. These QoS attributes can be categorized further; this paper categorizes them as space and time. We propose an affinity model to quantify location affinity for the composition of web services. Furthermore, we suggest a replication mechanism and an algorithm for taking recovery decisions based on time- and space-based QoS parameters and on the users' usage patterns of the services.
An Efficient Distributed Control Law for Load Balancing in Content Delivery N... (IJMER)
Regressive admission control enabled by real time QoS measurements (IJCNCJournal)
We propose a novel regressive principle for Admission Control (AC) assisted by real-time passive QoS monitoring. This measurement-based AC scheme accepts flows by default but, based on changes in network QoS, makes regressive decisions on possible flow rejection, thus bringing cognition to the network path. The REgressive Admission Control (REAC) system consists of three modules performing the necessary tasks: QoS measurement, traffic identification, and the actual AC decision making and flow control. The new scheme has two major advantages: (i) significant optimization of the connection start-up phase, and (ii) continuous QoS knowledge of the accepted streams. In fact, the latter, combined with the REAC decisions, can enable guaranteed QoS without requiring any QoS support from the network. REAC was tested on a video streaming test bed and showed a timely and realistic match between the network's QoS and the video quality.
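A minimal sketch of the accept-by-default, regressively-reject behavior might look like the class below. The thresholds, method names, and single-class structure are illustrative assumptions, not REAC's actual three-module implementation.

```python
class RegressiveAC:
    """Sketch of REAC-style admission control: admit every flow by default,
    then regressively reject a flow when passive QoS measurements degrade."""

    def __init__(self, loss_limit=0.02, delay_limit_ms=150.0):
        self.loss_limit = loss_limit          # illustrative QoS thresholds
        self.delay_limit_ms = delay_limit_ms
        self.flows = {}                       # flow_id -> admitted?

    def admit(self, flow_id):
        # Accept-by-default: no admission test at connection start-up.
        self.flows[flow_id] = True
        return True

    def on_measurement(self, flow_id, loss, delay_ms):
        # Callback from the (not shown) passive QoS monitoring module.
        if loss > self.loss_limit or delay_ms > self.delay_limit_ms:
            self.flows[flow_id] = False       # regressive rejection
        return self.flows[flow_id]
```

The connection start-up optimization comes from `admit` doing no work; all the cost is deferred to the measurement path.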
In this project, we propose a framework to support heterogeneous traffic with different QoS demands in WiMAX. The framework dynamically changes the bandwidth allocation (BA) for ongoing and newly arriving connections based on the network condition and service demand. The objective is to use the available bandwidth efficiently and provide QoS support in a fair manner. Dynamic allocation of spectrum prior to transmission can mitigate the starvation problem of non-real-time applications. The WFQ-based dynamic bandwidth allocation framework uses an architecture comprising a packet scheduler (PS), a call admission policy, and a dynamic bandwidth allocation mechanism. Simulation results show that this architecture can provide QoS support while being fair to all classes of service.
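The WFQ-based allocation idea can be sketched as a weighted max-min division of the available bandwidth: each class gets a share proportional to its weight, capped at its demand, and bandwidth left over by capped classes is redistributed among the rest. The loop below is a generic water-filling formulation under that assumption, not the paper's scheduler.

```python
def wfq_allocation(total_bw, flows):
    """Weighted fair division of total_bw among flows.
    flows maps name -> (weight, demand); allocation is proportional to
    weight, capped at demand, with spare bandwidth redistributed."""
    alloc = {}
    active = dict(flows)
    remaining = total_bw
    while active:
        wsum = sum(w for w, _ in active.values())
        # flows whose proportional share already exceeds their demand
        capped = {f: d for f, (w, d) in active.items()
                  if remaining * w / wsum >= d}
        if not capped:
            for f, (w, _) in active.items():
                alloc[f] = remaining * w / wsum
            break
        for f, d in capped.items():
            alloc[f] = d          # satisfy the capped flow exactly
            remaining -= d
            del active[f]
    return alloc
```

A low-demand real-time class (e.g. VoIP) is served in full, while the leftover bandwidth is split among the bulk classes by weight, which is what prevents starvation of non-real-time traffic.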
NETWORK PERFORMANCE EVALUATION WITH REAL TIME APPLICATION ENSURING QUALITY OF... (ijngnjournal)
Quality of service is a necessity in recent computer network developments. This paper evaluates characteristics such as dropped packets and bandwidth use in a proposed network topology, using two traffic sources: first a VoIP source over a UDP agent, then a CBR traffic source over a UDP agent in addition to the previous one. Two possible configurations are proposed, both implemented in the Network Simulator, with differentiated services implemented in one of them to compare the results. Statistics are presented for both cases, showing the cumulative dropped-packet count and the throughput on the link; the configuration with differentiated services obtains a reduced number of dropped packets and an improvement in bandwidth use.
Wireless Networks Performance Monitoring Based on Passive-active Quality of S... (IJCNCJournal)
Monitoring the performance of a wireless network is of vital importance for both users and the service provider, and it should be accurate, simple, and fast enough to reflect network performance in a timely manner. The aim of this paper is to develop an approach that can infer the performance of wireless ad hoc networks from an assessment of Quality of Service (QoS) parameters, taking into account the QoS requirements of multimedia applications transmitted over this kind of network. The approach combines active and passive measurement methods. It is an in-service measurement method in which the QoS parameters of the actual application (user) are estimated by means of dedicated monitoring packets (probes); these parameters are then combined by fuzzy-logic assessment to produce the application's overall QoS. The active scheme generates monitoring probe packets that are inserted between blocks of target application packets at regular intervals, while the passive monitoring acts as a traffic meter, counting the user packets (and bytes) that belong to the monitored application's traffic flow. In simulations, the developed technique offered good estimates of the delay, throughput, packet loss, and overall QoS at different probe rates.
The International Journal of Engineering and Science (The IJES) (theijes)
A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Comp... (1crore projects)
A Novel Approach for Efficient Resource Utilization and Trustworthy Web Service (CSCJournals)
Many Web services are expected to run with a high degree of security and dependability. To achieve this goal, it is essential to use a Web-services-compatible framework that tolerates not only crash faults but Byzantine faults as well, owing to the untrusted communication environment in which Web services operate. In this paper, we describe the design and implementation of such a framework, called RET-WS (Resource-Efficient and Trustworthy Execution Web Service). RET-WS is designed to operate on top of the standard SOAP messaging framework for maximum interoperability, and provides a resource-efficient way to execute requests under Byzantine-fault-tolerant replication that is particularly well suited to services in which request processing is resource-intensive. Previous efforts took a failure-masking, all-active approach in which all execution replicas execute all requests; at least 2t + 1 execution replicas are needed to mask t Byzantine-faulty ones. We describe an asynchronous protocol that provides resource-efficient execution by combining failure masking with imperfect failure detection and checkpointing. It is implemented as a pluggable module within the Axis2 architecture and as such requires minimal changes to Web applications. The core fault-tolerance mechanisms in RET-WS are based on the well-known Castro and Liskov BFT algorithm for optimal efficiency, with modifications for resource efficiency. Our performance measurements confirm that RET-WS incurs only moderate runtime overhead considering the complexity of the mechanisms.
AWSQ: an approximated web server queuing algorithm for heterogeneous web serv... (IJECEIAES)
With the rising popularity of web-based applications, cluster-based web servers have become a primary and consistent resource in the infrastructure of the World Wide Web. Particularly for dynamic content and database-driven applications, and especially under heavy load, managing cluster performance is a serious task: without efficient mechanisms, an overloaded web server cannot deliver good performance. In clusters, this overload condition can be avoided by load balancing mechanisms that share the load among the available web servers. Existing load balancing mechanisms that were intended for static content suffer substantial performance degradation under database-driven and dynamic content. The most serviceable load balancing approaches under specific conditions are Web Server Queuing (WSQ), Server Content based Queue (QSC), and Remaining Capacity (RC). Considering this, we propose an approximated web server queuing mechanism for web server clusters together with an analytical model for calculating the load of a web server. Requests are classified based on service time, and the number of outstanding requests at each web server is tracked to achieve better performance; the approximated load of each web server is then used for load balancing. The experimental results illustrate the effectiveness of the proposed mechanism in improving the mean response time, throughput, and drop rate of the server cluster.
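The outstanding-request tracking can be sketched as below. This is a generic WSQ-style dispatch rule, with load approximated as outstanding requests divided by server capacity, not the paper's exact AWSQ algorithm or its analytical load model; all names are illustrative.

```python
def dispatch(requests, servers):
    """Assign each request to the server with the lowest approximated load,
    where load = outstanding requests / capacity (a WSQ-style rule).
    servers maps name -> relative capacity; ties break by server name."""
    outstanding = {s: 0 for s in servers}
    assignment = []
    for req in requests:
        target = min(servers, key=lambda s: (outstanding[s] / servers[s], s))
        outstanding[target] += 1   # request is now pending at the target
        assignment.append((req, target))
    return assignment
```

In a real cluster, `outstanding` would be decremented when responses complete; the point of the approximation is that only this counter, not per-request service times, needs to be tracked online.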
ENHANCING AND MEASURING THE PERFORMANCE IN SOFTWARE DEFINED NETWORKING (IJCNCJournal)
Software Defined Networking (SDN) is a challenging chapter in today's networking era. It is a network design approach in which the infrastructure is controlled, or "programmed", centrally by software applications. SDN is a significant advance that promises a better strategy for delivering Quality of Service (QoS) than present communication frameworks. SDN fundamentally changes the behavior and functionality of network devices through a single high-level program. It separates the network's control and forwarding functions, enabling the network control to become directly programmable, and it provides more functionality and flexibility than traditional networks: a network administrator can shape traffic without touching the individual switches and services in the network. The key techniques for implementing SDN are the separation of the data plane from the control plane, and network virtualization through programmability. Response time is the total time within which the system responds to a user, and throughput measures how fast a network can send data. In this paper, we design a network in which we measure response time and throughput, comparing Real-time Online Interactive Applications (ROIA), a Multiple Packet Scheduler, and NOX.
On the quality of service of crash recovery (ingenioustech)
Secure and Proficient Cross Layer (SPCL) QoS Framework for Mobile Ad-hoc (IJECEIAES)
A cross-layer QoS framework is a complete system that provides the required QoS services to each node in the network, with all of its components cooperating to provide those services. Existing QoS frameworks provide no security mechanism, yet security is a critical aspect of QoS in the MANET environment. Cross-layer QoS frameworks tend to be vulnerable to a number of threats and attacks, such as over- or under-reporting of available bandwidth, over-reservation, state-table starvation, QoS degradation, information disclosure, theft of service, timing attacks, flooding attacks, replay attacks, denial-of-service (DoS) attacks, attacks on information in transit, and attacks against routing. When designing protocols for a QoS framework it is therefore necessary to keep security and QoS in harmony, as each impacts the other. In this work we propose a secure and proficient cross-layer (SPCL) QoS framework that prevents various types of threats and attacks. The proposed SPCL QoS framework achieves better performance than existing QoS frameworks in throughput, packet drop ratio, end-to-end delay, and average jitter, both when a malicious node is present in the network and when it is not.
A TWO-LEVELED WEB SERVICE PATH RE-PLANNING TECHNIQUE (ijwscjournal)
Web service paths can fulfill customer requirements. During the execution of a web service path, violations of service level agreements (SLAs) cause the path to be re-planned (healed). Existing re-planning techniques generally ignore the effect of requirement change. This paper proposes a two-leveled path re-planning technique (TLPRP) that offers the following features: (a) TLPRP is composed of both a meta level and a physical level; the meta-level re-planning senses environment parameters (e.g., requirement changes and analysis/design errors) and re-plans the affected component paths produced by system design. (b) The physical-level re-planning algorithm is capable of web service path composition. (c) The re-planning algorithm embeds an access-control policy that computes a success probability for every path, which helps avoid execution failure caused by web service access failure.
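The per-path success probability of feature (c) can be illustrated with a toy model: if per-service access failures are assumed independent, a path's success probability is the product of its services' availabilities, and re-planning can prefer the candidate path with the highest value. The independence assumption and all names are illustrative, not the paper's access-control policy.

```python
from math import prod

def path_success(path, availability):
    """Probability that every service on the path is reachable, assuming
    independent per-service access failures (illustrative model)."""
    return prod(availability[s] for s in path)

def pick_path(candidate_paths, availability):
    """Re-planning preference: choose the candidate path least likely to
    fail because of a web service access failure."""
    return max(candidate_paths, key=lambda p: path_success(p, availability))
```

A longer path through highly available services can thus beat a shorter path that traverses one unreliable service.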
Functionality Based Fixing of Weight for Non-functional parameters of a web s... (Editor IJCATR)
In recent years, web services have opened the way to global business development through B2B integration, and they are used with different business applications to accommodate their services. When selecting a service, first preference is given to the functional properties of the web service, and then the best web service is selected from the list by considering the expected quality. The QoS of a web service also depends on the varying non-functional parameters of the service, which are subject to change because of network and other related factors. When considering the performance of a service, the overall quality is always preferred, but the actual functionality of the service depends on the weights fixed for non-functional parameters such as response time, throughput, and so on. Assigning functionality-based weights to the non-functional parameters therefore helps achieve the assured quality of the service. In this paper we propose a system that gives importance to non-functional parameters such as response time, throughput, availability, successability, and reliability, with weights set based on the functionality and requirements of a particular business application.
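A weighted score over non-functional parameters might be computed as below: each metric is normalized to [0, 1] against stated best/worst bounds (with lower-is-better metrics such as response time inverted), then combined with application-specific weights. The normalization scheme, bounds, and weights are illustrative assumptions, not the paper's actual weighting method.

```python
def qos_score(metrics, weights, lower_is_better=("response_time",)):
    """Weighted QoS score over non-functional parameters.
    metrics maps name -> (value, worst_bound, best_bound); weights maps
    name -> weight, where the weights reflect the functionality of the
    target business application and are assumed to sum to 1."""
    score = 0.0
    for name, (value, worst, best) in metrics.items():
        if name in lower_is_better:
            norm = (worst - value) / (worst - best)   # smaller -> closer to 1
        else:
            norm = (value - worst) / (best - worst)   # larger -> closer to 1
        score += weights[name] * norm
    return score
```

A latency-sensitive application would put most of the weight on `response_time`, while a batch-oriented one might weight `throughput` or `reliability` instead.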
Traffic-aware adaptive server load balancing for software-defined networks (IJECEIAES)
Servers in data center networks handle heterogeneous bulk loads, so load balancing plays an important role in optimizing network bandwidth and minimizing response time. A complete view of the current network status is needed to keep the load in the network stable. Cataloging the network status in a traditional network requires additional processing, which increases complexity, whereas in software-defined networking the control plane continuously monitors the overall working of the network. Hence we propose an efficient load balancing algorithm that builds on SDN: TAASLB, a traffic-aware adaptive server load balancing algorithm that balances flows to the servers in a data center network. It works on two parameters, residual bandwidth and server capacity, and it detects elephant flows and forwards them to the optimal server, where they can be processed quickly. Tested with the Mininet simulator, TAASLB gave considerably better results than the existing server load balancing algorithms in the Floodlight controller.
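The two-parameter rule can be sketched as follows. The elephant-flow threshold, the field names, and the split between the two criteria are illustrative assumptions; the controller-side detection and flow-rule installation are not reproduced.

```python
def pick_server(servers, flow_rate, elephant_threshold=10.0):
    """Traffic-aware choice: elephant flows (rate >= threshold, arbitrary
    units) go to the server with the most residual bandwidth; small
    ('mice') flows go to the server with the most spare capacity.
    servers maps name -> {"residual_bw": ..., "spare_capacity": ...}."""
    if flow_rate >= elephant_threshold:
        key = lambda name: servers[name]["residual_bw"]
    else:
        key = lambda name: servers[name]["spare_capacity"]
    return max(sorted(servers), key=key)   # sorted() makes ties deterministic
```

Steering only the heavy flows by bandwidth keeps the elephants from saturating a link while the many small flows are still spread by processing headroom.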
QOS WITH RELIABILITY AND SCALABILITY IN ADAPTIVE SERVICE-BASED SYSTEMScscpconf
Service-based systems that are dynamically composed at runtime to provide complex, adaptive
functionality are currently one of the main development paradigms in software engineering.
However, the Quality of Service (QoS) delivered by these systems remains an important
concern, and needs to be managed in an equally adaptive and predictable way. To address this
need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimization of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment, and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analyzed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
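The translation step described above, from a high-level requirement to a probabilistic temporal logic formula, can be illustrated with a small sketch. The PRISM-like PCTL syntax and the `"failure"` label are assumptions; QoSMOS's actual formula templates may differ.

```python
# Hedged sketch: mapping a high-level reliability requirement to a
# PCTL-style probabilistic temporal logic string (PRISM-like syntax,
# an assumption about the concrete notation).

def reliability_requirement(max_failure_prob):
    """'The probability of eventually reaching a failure state is at
    most p' expressed as a PCTL formula."""
    return f"P<={max_failure_prob} [ F \"failure\" ]"

print(reliability_requirement(0.02))  # → P<=0.02 [ F "failure" ]
```

A probabilistic model checker would then verify such a formula against a model of each candidate system configuration, which is how the framework identifies configurations that satisfy the requirement.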
FUZZY-BASED ARCHITECTURE TO IMPLEMENT SERVICE SELECTION ADAPTATION STRATEGYijwscjournal
One of the main requirements of service-based applications (SBAs) is runtime adaptation to changes that occur in business, user, environment, and computational contexts. Changes in context lead to QoS degradation, so continuous adaptation mechanisms and strategies are required to keep service-based applications in a safe state. In this paper a framework for runtime adaptation in service-based applications is introduced. It continuously checks for changes in user requirements and dynamically adapts the architecture model. It also continuously checks providers' QoS attributes and, if adaptation is triggered, runs a service selection adaptation strategy to satisfy user preferences. It is thus a context-aware and automatically adaptable framework for SBAs. We have implemented a fuzzy-based system for the web service selection unit. Given the ambiguity of context data and the cross-cutting effects of quality of service, using fuzzy logic yields an optimised decision. Finally, we show that the framework performs well for web-service-based applications.
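A fuzzy selection unit of the kind described can be sketched with triangular membership functions and one rule. The membership shapes, the single rule, and the candidate values are illustrative assumptions, not the paper's rule base.

```python
# Minimal fuzzy-scoring sketch for a service selection unit; membership
# functions, the rule, and the candidates are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_fitness(response_ms, availability):
    # Degree to which response time is "fast" and availability is "high".
    fast = tri(response_ms, 0, 50, 200)
    high = tri(availability, 0.9, 1.0, 1.1)
    # One Mamdani-style rule: IF fast AND high THEN fit (min t-norm).
    return min(fast, high)

candidates = {"svcA": (40, 0.99), "svcB": (150, 0.999)}
best = max(candidates, key=lambda s: fuzzy_fitness(*candidates[s]))
print(best)  # → svcA
```

The min t-norm captures the cross-cutting effect the abstract mentions: a service that is excellent on one attribute but poor on another cannot score well overall.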
Regressive admission control enabled by real time qos measurementsIJCNCJournal
We propose a novel regressive principle for Admission Control (AC) assisted by real-time passive QoS monitoring. This measurement-based AC scheme accepts flows by default but, based on changes in network QoS, makes regressive decisions on possible flow rejection, thus bringing cognition to the network path. The REgressive Admission Control (REAC) system consists of three modules performing the necessary tasks: QoS measurements, traffic identification, and the actual AC decision making and flow control. This new scheme has two major advantages: (i) significant optimization of the connection start-up phase, and (ii) continuous QoS knowledge of the accepted streams. In fact, the latter combined with the REAC decisions can enable guaranteed QoS without requiring any QoS support from the network. REAC was tested on a video streaming test bed and showed a timely and realistic match between the network's QoS and the video quality.
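The regressive decision logic, accept by default, reject only when measured QoS degrades, can be sketched as follows. The delay and loss thresholds are illustrative assumptions.

```python
# Sketch of regressive admission: flows are admitted optimistically and
# only rejected later if measured QoS degrades. Thresholds are assumed.

DELAY_LIMIT_MS = 150.0
LOSS_LIMIT = 0.02

def reac_decision(flow_id, measured_delay_ms, measured_loss):
    """Return the (possibly regressive) decision for a flow that was
    admitted by default at start-up."""
    if measured_delay_ms > DELAY_LIMIT_MS or measured_loss > LOSS_LIMIT:
        return "reject"   # regressive rejection after admission
    return "keep"

print(reac_decision("video-1", 80.0, 0.001))   # healthy path → keep
print(reac_decision("video-2", 220.0, 0.001))  # delay violated → reject
```

Because the check runs continuously on passive measurements, the start-up phase needs no signalling, which is the optimization the abstract claims.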
In this project, we propose a framework to support heterogeneous traffic with different QoS demands in WiMAX. The framework dynamically changes the bandwidth allocation (BA) for ongoing and newly arriving connections based on network conditions and service demand. The objective is to use the available bandwidth efficiently and provide QoS support in a fair manner. Dynamic allocation of spectrum prior to transmission can mitigate the starvation problem of non-real-time applications. The WFQ-based dynamic bandwidth allocation framework uses an architecture comprising a packet scheduler (PS), a call admission policy, and a dynamic bandwidth allocation mechanism. Simulation results show that this architecture can provide QoS support while being fair to all classes of service.
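The WFQ allocation underlying the framework can be shown with back-of-the-envelope arithmetic: each backlogged class gets a share of the link proportional to its weight. The class weights and link capacity below are assumed values, though the class names follow the standard WiMAX service classes.

```python
# Back-of-the-envelope WFQ share computation for dynamic bandwidth
# allocation; weights and link capacity are assumed values.

def wfq_shares(link_capacity_mbps, weights):
    """Each backlogged class receives capacity * w_i / sum(w)."""
    total = sum(weights.values())
    return {cls: link_capacity_mbps * w / total for cls, w in weights.items()}

# Real-time classes weighted higher, but non-real-time never starves.
shares = wfq_shares(100.0, {"UGS": 5, "rtPS": 3, "nrtPS": 1, "BE": 1})
print(shares["nrtPS"])  # → 10.0 Mbps guaranteed for non-real-time polling
```

The nonzero weight for nrtPS and BE is exactly what prevents the starvation problem the abstract mentions: even under heavy real-time load, those classes keep a guaranteed fraction of the link.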
NETWORK PERFORMANCE EVALUATION WITH REAL TIME APPLICATION ENSURING QUALITY OF...ijngnjournal
Quality of service is a necessity in recent computer network developments. This paper evaluates characteristics of a proposed network topology, such as dropped packets and bandwidth use, using two traffic sources: first a VoIP source over a UDP agent, then a CBR traffic source over a UDP agent in addition to the previous one. Two possible configurations are proposed, both implemented in the Network Simulator, with differentiated services implemented in one of them to compare the results. Statistics for both cases show the cumulative dropped-packet count and the link throughput, with a reduced number of dropped packets and improved bandwidth use in the configuration with differentiated services.
Wireless Networks Performance Monitoring Based on Passive-active Quality of S...IJCNCJournal
Monitoring the performance of a wireless network is of vital importance for both users and the service provider, and should be accurate, simple, and fast enough to reflect network performance in a timely manner. The aim of this paper is to develop an approach that can infer the performance of wireless ad hoc networks based on Quality of Service (QoS) parameter assessment. The developed method considers the QoS requirements of multimedia applications transmitted over this kind of network. The approach combines active and passive measurement methods: it uses an in-service measurement method in which the QoS parameters of the actual application (user) are estimated by means of dedicated monitoring packets (probes), and these parameters are then combined to assess the application's overall QoS using fuzzy logic. The active scheme generates monitoring probe packets that are inserted between blocks of target application packets at regular intervals, while passive monitoring acts as a traffic meter, counting the user packets (and bytes) belonging to the monitored application (user) traffic flow. In simulation, the developed technique offered a good estimation of delay, throughput, packet losses, and the overall QoS at different probe rates.
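The probe-based estimation step above amounts to simple statistics over the probe stream: probe delays stand in for the application's delay, and probe loss for its loss rate. The probe log and counts below are illustrative assumptions.

```python
# Sketch of probe-based QoS estimation: active probes interleaved with
# application traffic yield delay and loss estimates. Values are
# illustrative assumptions.

probe_delays_ms = [24.0, 31.5, 28.2, 90.0, 26.7]  # per-probe one-way delay
probe_sent, probe_received = 60, 57

# Mean probe delay approximates the application's delay; probe loss
# approximates the application's loss rate.
est_delay = sum(probe_delays_ms) / len(probe_delays_ms)
est_loss = 1 - probe_received / probe_sent
print(round(est_delay, 1), round(est_loss, 3))  # → 40.1 0.05
```

These per-parameter estimates are what the fuzzy assessment stage would then combine into a single overall QoS grade.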
A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Comp...1crore projects
A Novel Approach for Efficient Resource Utilization and Trustworthy Web ServiceCSCJournals
Many Web services are expected to run with a high degree of security and dependability. To achieve this goal, it is essential to use a Web-services-compatible framework that tolerates not only crash faults but Byzantine faults as well, owing to the untrusted communication environment in which Web services operate. In this paper, we describe the design and implementation of such a framework, called RET-WS (Resource Efficient and Trustworthy Execution Web Service). RET-WS is designed to operate on top of the standard SOAP messaging framework for maximum interoperability, providing a resource-efficient way to execute requests under Byzantine-fault-tolerant replication that is particularly well suited to services in which request processing is resource-intensive. Previous efforts took a failure-masking, all-active approach of using all execution replicas to execute all requests; at least 2t + 1 execution replicas are needed to mask t Byzantine-faulty ones. We describe an asynchronous protocol that provides resource-efficient execution by combining failure masking with imperfect failure detection and checkpointing. It is implemented as a pluggable module within the Axis2 architecture and, as such, requires minimal changes to Web applications. The core fault tolerance mechanisms in RET-WS are based on the well-known Castro and Liskov BFT algorithm for optimal efficiency, with some modifications for resource efficiency. Our performance measurements confirm that RET-WS incurs only moderate runtime overhead considering the complexity of the mechanisms.
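The replication arithmetic behind the abstract is worth spelling out: an all-active execution scheme needs 2t + 1 execution replicas so that correct outputs out-vote t Byzantine ones, while classic BFT agreement (Castro-Liskov) needs 3t + 1 replicas. The functions below just encode those well-known bounds.

```python
# The replica-count bounds referenced above, encoded directly.

def execution_replicas_all_active(t):
    """All-active execution: correct replies must out-vote t faulty
    ones, so 2t + 1 execution replicas are required."""
    return 2 * t + 1

def agreement_replicas(t):
    """Classic BFT agreement (Castro-Liskov) requires 3t + 1 replicas
    to tolerate t Byzantine faults."""
    return 3 * t + 1

print(execution_replicas_all_active(2), agreement_replicas(2))  # → 5 7
```

RET-WS's contribution, per the abstract, is reducing the execution-replica cost below the all-active 2t + 1 bound by combining failure detection with checkpointing; the exact reduced count is protocol-specific and not reproduced here.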
AWSQ: an approximated web server queuing algorithm for heterogeneous web serv...IJECEIAES
With the rising popularity of web-based applications, cluster-based web servers are a primary and consistent resource in the infrastructure of the World Wide Web. Particularly for dynamic content and database-driven applications, and especially under heavy load, managing cluster performance is a serious task. Without efficient mechanisms, an overloaded web server cannot deliver good performance. In clusters, this overload can be avoided by load balancing mechanisms that share the load among the available web servers. Existing load balancing mechanisms intended for static content suffer substantial performance degradation under database-driven and dynamic content. The most serviceable load balancing approaches are Web Server Queuing (WSQ), Server Content based Queue (QSC), and Remaining Capacity (RC), which provide better results under specific conditions. Considering this, we propose an approximated web server queuing mechanism for web server clusters, along with an analytical model for calculating the load of a web server. Requests are classified based on service time, and the number of outstanding requests at each web server is tracked to achieve better performance. The approximated load of each web server is used for load balancing. Experimental results illustrate the effectiveness of the proposed mechanism in improving the mean response time, throughput, and drop rate of the server cluster.
ENHANCING AND MEASURING THE PERFORMANCE IN SOFTWARE DEFINED NETWORKINGIJCNCJournal
Software Defined Networking (SDN) is a challenging chapter in today's networking era. It is a network design approach in which the network can be controlled or "altered" intelligently and centrally using software applications. SDN is a significant advance that promises a better strategy for delivering Quality of Service (QoS) than present communication frameworks. It fundamentally changes the behaviour and management of network devices through a single high-level program, separating the network's control and forwarding functions so that the network control becomes directly programmable, and it provides more functionality and flexibility than traditional networks. A network administrator can easily shape traffic without touching individual switches and services in the network. The key enablers of SDN are the separation of the data plane and control plane, and network virtualization through programmability. Response time is the total time taken to respond to a user request; throughput is how fast a network can deliver data. In this paper, we design a network in which we measure response time and throughput, comparing Real-time Online Interactive Applications (ROIA), Multiple Packet Scheduler, and NOX.
On the quality of service of crash recoveryingenioustech
Secure and Proficient Cross Layer (SPCL) QoS Framework for Mobile Ad-hocIJECEIAES
A cross-layer QoS framework is a complete system that provides the required QoS services to each node in the network, with all of its components cooperating to provide those services. Existing QoS frameworks provide no security mechanism, although security is a critical aspect of QoS in the MANET environment. Cross-layer QoS frameworks tend to be vulnerable to a number of threats and attacks, such as over/under-reporting of available bandwidth, over-reservation, state table starvation, QoS degradation, information disclosure, theft of service, timing attacks, flooding attacks, replay attacks, denial of service (DoS) attacks, attacks on information in transit, and attacks against routing. When designing protocols for a QoS framework, security and QoS must therefore be kept in harmony, as each impacts the other. In this work we propose a secure and proficient cross-layer (SPCL) QoS framework that protects against these threats and attacks. The proposed SPCL QoS framework achieves better performance than existing QoS frameworks in terms of throughput, packet drop ratio, end-to-end delay, and average jitter, both with and without malicious nodes present in the network.
A TWO-LEVELED WEB SERVICE PATH RE-PLANNING TECHNIQUEijwscjournal
Web service paths can fulfil customer requirements. During the execution of a web service path, violations of service level agreements (SLAs) cause the path to be re-planned (healed). Existing re-planning techniques generally ignore the effect of requirement change. This paper proposes a two-leveled path re-planning technique (TLPRP) with the following features: (a) TLPRP comprises both a meta level and a physical level; the meta level senses environment parameters (e.g., requirement changes and analysis/design errors) and re-plans the affected component paths produced by system design. (b) The physical-level re-planning algorithm can compose web service paths. (c) The re-planning algorithm embeds an access control policy that computes a success possibility for every path, helping to avoid execution failures caused by web service access failures.
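The per-path success possibility in feature (c) can be illustrated with a simple model: treat each service's access probability as independent and multiply them along the path. The independence assumption and the probability values are illustrative, not the paper's actual policy computation.

```python
# Hedged sketch of ranking candidate paths by success possibility,
# modeled as a product of independent per-service access probabilities
# (an assumption for illustration).

from functools import reduce

def path_success(access_probs):
    """P(path succeeds) under independence of per-service access."""
    return reduce(lambda a, b: a * b, access_probs, 1.0)

paths = {
    "path1": [0.99, 0.95, 0.90],   # one weak link drags the path down
    "path2": [0.97, 0.97, 0.97],
}
best = max(paths, key=lambda p: path_success(paths[p]))
print(best)  # → path2
```

Note how a single low-probability service dominates the product, which is why a re-planner would prefer a uniformly reliable path over one with an excellent but fragile segment.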
IMPACT OF RESOURCE MANAGEMENT AND SCALABILITY ON PERFORMANCE OF CLOUD APPLICA...IJCSEA Journal
Cloud computing enables service providers to rent out their computing capabilities for deploying applications according to user requirements. Cloud applications have diverse composition, configuration, and deployment requirements. Quantifying the performance of applications in cloud computing environments is a challenging task. In this paper, we identify various parameters associated with the performance of cloud applications and analyse the impact of resource management and scalability on them.
AN ADAPTIVE APPROACH FOR DYNAMIC RECOVERY DECISIONS IN WEB SERVICE COMPOSITIO...ijwscjournal
Service Oriented Architecture facilitates the automatic execution and composition of web services in distributed environments. Service composition in heterogeneous environments may suffer from various kinds of service failures, which interrupt the execution of composite web services and can lead to complete system failure. Dynamic recovery decisions for failed services depend on non-functional attributes of the services. In recent years, various methodologies have been presented that base recovery decisions on time-related QoS (Quality of Service) factors. These QoS attributes can be categorized further; our paper categorizes them as space and time. In this paper, we propose an affinity model to quantify location affinity for the composition of web services. Furthermore, we suggest a replication mechanism and an algorithm for taking recovery decisions based on time- and space-based QoS parameters and on the services' usage patterns.
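A space-and-time recovery decision of the kind described might combine a location-affinity score with recovery cost and usage frequency. The fields, thresholds, and decision rule below are illustrative assumptions, not the paper's affinity model.

```python
# Illustrative sketch of a space-and-time recovery decision: replicate a
# service when its location affinity and recovery cost justify it.
# Thresholds, fields, and the rule itself are assumptions.

def should_replicate(affinity, mean_recovery_ms, usage_per_hour):
    """Replicate when the service has high location affinity AND is
    expensive to recover relative to how often it is used."""
    return affinity > 0.7 and mean_recovery_ms * usage_per_hour > 10_000

print(should_replicate(0.9, 500.0, 30))   # hot, costly-to-recover → True
print(should_replicate(0.3, 500.0, 30))   # low affinity → False
```

The point of combining both dimensions is that replicating a rarely used or poorly co-located service wastes resources, while a frequently used, well-located one amortizes the replica's cost.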
WEB SERVICE COMPOSITION IN DYNAMIC ENVIRONMENT: A COMPARATIVE STUDYcscpconf
Web service composition development is a complex and dynamic process, and one of the challenges of distributed dynamic environments. Although SOA (Service Oriented Architecture) facilitates the service composition process through standard protocols for searching and binding web services, composition in the SOA paradigm still faces many challenges. One of the main challenges is the environment in which composite services are developed: environments are increasingly dynamic as the number of frequently changing web services grows. Therefore, self-adapting composition methods that act according to environment changes are advocated. In this paper, we study existing research that addresses web service composition in dynamic environments, survey the state of the art in this area, and assist future research.
Proactive Scheduling in Cloud ComputingjournalBEEI
Autonomic fault-aware scheduling is an important feature for cloud computing, related to adapting to workload variation. In this context, this paper proposes a fault-aware, pattern-matching autonomic scheduler for cloud computing based on autonomic computing concepts. To validate the proposed solution, we performed two experiments, one with the traditional approach and the other with the pattern-recognition fault-aware approach. The results show the effectiveness of the scheme.
Cloud computing is the fastest-emerging technology and a novel buzzword in the IT domain. It offers distinct services and applications and focuses on providing sustainable, reliable, scalable, and virtualized resources to its consumers. The main aim of cloud computing is to enhance the use of distributed resources to achieve higher throughput and resource utilization in large-scale computation problems. Scheduling affects the efficiency of the cloud and plays a significant role in creating a high-performance environment. The Quality of Service (QoS) requirements of user applications define the scheduling of resources. A number of researchers have tried to solve these scheduling problems using different QoS-based scheduling techniques. In this paper, a detailed analysis of resource scheduling methodology is presented, discussing different types of scheduling based on soft computing techniques, their comparisons, benefits, and results. The major findings of this paper help researchers decide on a suitable approach for scheduling users' applications considering their QoS requirements.
AVAILABILITY METRICS: UNDER CONTROLLED ENVIRONMENTS FOR WEB SERVICES ijwscjournal
Web Services technology has the potential to cater to an enterprise's needs, providing the ability to integrate different systems and application types regardless of their platform, operating system, programming language, or location. Web Services can be deployed in a traditional, centralized, or brokered client-server approach, or in a peer-to-peer manner. A Web Service can act as a server providing functionality to a requester, or as a client receiving functionality from another service. From a performance perspective, the availability of Web Services plays an important role among many parameters.
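A common way to measure availability under a controlled environment is the fraction of probe invocations that succeed within a timeout. The probe log below is illustrative; the paper's metrics may be defined differently.

```python
# A common availability metric under controlled probing: the fraction
# of probe invocations that succeed. The probe log is illustrative.

def availability(probe_results):
    """Fraction of successful probes; probe_results is a list of bools
    (True = the service responded correctly within the timeout)."""
    return sum(probe_results) / len(probe_results)

probes = [True, True, False, True, True, True, True, True, False, True]
print(availability(probes))  # → 0.8
```

Running the same probe schedule against each deployment style (centralized, brokered, peer-to-peer) would give directly comparable availability figures, which is the kind of controlled comparison the abstract points at.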
SERVICE ORIENTED QUALITY REQUIREMENT FRAMEWORK FOR CLOUD COMPUTINGijcsit
This research paper introduces a framework to identify the quality requirements of cloud computing services. It considers two dominant sub-layers, the functional layer and the runtime layer, against cloud characteristics. SERVQUAL model attributes and the opinions of industry experts were used to derive the quality constructs in a cloud computing environment. The framework properly identifies users' expectations of cloud computing service quality. The validity of the framework was evaluated using a questionnaire-based survey, and the partial least squares structural equation modelling (PLS-SEM) technique was used to evaluate the outcome. The research findings show that the functional layer is more significant than the runtime layer, and that the prioritized quality factors across the two layers are service time, information and data security, recoverability, service transparency, and accessibility.
IEEE publication on QoS
POLICY SPECIFICATION AND ARCHITECTURE FOR
QUALITY OF SERVICE MANAGEMENT
Nathan Muruganantha and Hanan Lutfiyya
Department of Computer Science
The University of Western Ontario
hanan@csd.uwo.ca
Abstract: An application’s Quality of Service (QoS) requirements refer to non-functional, run-time
requirements. These requirements are usually soft, in that the application is functionally
correct even if a QoS requirement is not satisfied at run-time. QoS requirements are also
dynamic: for a specific application, they change over time. The ability to satisfy an
application’s QoS requirements depends on the available resources. Since an application may
have different QoS requirements in different sessions, the resources needed differ, and a
differentiated service must be supported. Since an application’s QoS requirements are soft,
they may not always be satisfied; it must be possible to dynamically allocate more resources. In
an overloaded situation, it may be necessary to allocate resources to one application at the
expense of other applications. Policies are used to express QoS requirements, the actions
to be taken when a QoS requirement is not satisfied, and the actions to be taken in overloaded
situations. Policies change dynamically. Supporting these policies is done through a set of
distributed managed processes, so it must be possible to specify policies and have these
policies distributed to the managed processes. This paper describes how these policies can be
formally specified, along with a management architecture (based on the IETF framework) that
describes how the policies are distributed and used by the management system. We conclude
with a discussion of our experiences with the management system developed.
Keywords: QoS Management, Policies, Application Management
1. Introduction
An application’s Quality of Service (QoS) refers to its non-functional, run-time
requirements. A possible QoS requirement for an application receiving a video stream
is the following: “The number of video frames per second displayed must be 25, plus
or minus 2 frames." A QoS requirement is soft for an application if the application
is functionally correct even though the QoS requirement is not satisfied at run-time.
Otherwise, the QoS requirement is said to be hard (e.g., flight control systems, patient
monitoring systems).
The allocation and scheduling of computing resources is referred to as QoS management.
QoS management techniques (e.g., [25]), such as resource reservation and admission control,
can be used to guarantee QoS requirements, since resource reservation ensures that the
needed resources are available.
This work is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada,
the IBM Centre for Advanced Studies in Toronto, Canada, and the Canadian Institute for
Telecommunications Research (CITR).
Figure 1. Policy-Based QoS Management Architecture. (Diagram: application processes with embedded sensors and a coordinator interact with the framework components — Policy Editor, Policy Manager, Name Server, Event Manager, PDPs, PEPs, and Resource Managers controlling devices — exchanging events, policies, action requests, and resource adjustment requests. The application process and resource managers are platform-dependent components.)
2.1 Expectation Policies
An expectation policy states how QoS requirements are determined, where the violation
of a QoS requirement is to be reported, and any other possible actions. Expectation
policies are specified using the Ponder obligation policy formalism (for more
information see [3]). In this formalism, a policy specifies the action that a subject
must perform on a set of target objects when an event occurs.
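The Ponder example itself is not reproduced here. As an illustration only, the following sketch (hypothetical names; Python is used purely as notation) models the event-condition-action shape of such an expectation policy:

```python
from dataclasses import dataclass, field

@dataclass
class ExpectationPolicy:
    # Event-condition-action form loosely modeled on a Ponder
    # obligation policy; all names here are illustrative.
    event: str          # event that triggers the policy
    subject: str        # entity that must perform the action
    target: str         # entity the action is performed on
    conditions: dict = field(default_factory=dict)  # attribute thresholds
    actions: list = field(default_factory=list)     # actions on violation

# A frame-rate expectation matching the requirement of Example 1:
# 25 frames per second, plus or minus 2.
fps_policy = ExpectationPolicy(
    event="frame_rate_sampled",
    subject="coordinator",
    target="event_manager",
    conditions={"fps_lower": 23, "fps_target": 25, "fps_upper": 27},
    actions=["notify(EventManagerHandle, fps_low)"],
)
```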
2.2 Monitoring
Example 2 illustrates that application QoS requirements are defined in terms of
application-specific attributes. Monitoring of the application is needed to collect val-
ues of the attributes (e.g., current fps). One monitoring mechanism is through instru-
mentation of the application which is briefly described here.
An attribute is associated with a sensor (see Figure 1). A sensor is a class with
variables for representing threshold and target values. Sensors are used to collect,
maintain, and process a wide variety of attribute information within the instrumented
processes. The sensor’s methods (probes) are used to initialise sensors with threshold
and target values and collect values of attributes. Probes are embedded into process
code to facilitate interactions with sensors.
As an example, consider the sample pseudo-code for a video playback application
in Figure 2, which has the QoS requirement stated in Example 1 and a sensor S. This
QoS requirement suggests an upper threshold of 27 frames per second and a lower
threshold of 23 frames per second. Sensor S includes probes such as the following:
(1) An initialisation probe (line 3 of Figure 2) that takes as parameters the default
threshold target value and the lower and upper bounds. When the coordinator was
instantiated (line 2 of Figure 2), it communicated with the QoS management system
to get the application’s QoS requirements in the form of the AttributeConditionList.
Thus, when the sensor requests this information, the coordinator will already have
retrieved it. (2) A probe that (i) determines the elapsed time between frames and
checks whether this time falls within the range defined by the lower and upper
acceptable thresholds, filtering out unusual spikes; and (ii) informs the coordinator
if the frames per second fall below the lower threshold or rise above the upper
threshold.
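The probes described above can be sketched as follows (a minimal illustration with hypothetical class and method names, not the paper's actual instrumentation API):

```python
import time

class FrameRateSensor:
    """Sensor embedded in an instrumented process (cf. Figure 1).
    Class and method names are illustrative."""

    def __init__(self, coordinator):
        self.coordinator = coordinator
        self.lower = self.target = self.upper = None
        self.last_frame = None

    def init_probe(self, lower, target, upper):
        # Initialisation probe (cf. line 3 of Figure 2): thresholds come
        # from the AttributeConditionList the coordinator has retrieved.
        self.lower, self.target, self.upper = lower, target, upper

    def probe_framerate(self, now=None):
        # Per-frame probe (cf. line 7 of Figure 2): compute the elapsed
        # time between frames and inform the coordinator when the frame
        # rate leaves the acceptable range.
        now = time.monotonic() if now is None else now
        if self.last_frame is not None:
            elapsed = now - self.last_frame
            if elapsed > 0:
                fps = 1.0 / elapsed
                if fps > 10 * self.target:
                    pass                      # filter out unusual spikes
                elif fps < self.lower:
                    self.coordinator.notify("fps_low", fps)
                elif fps > self.upper:
                    self.coordinator.notify("fps_high", fps)
        self.last_frame = now

class RecordingCoordinator:
    # Stand-in coordinator that records the events it is asked to raise.
    def __init__(self):
        self.events = []
    def notify(self, name, fps):
        self.events.append(name)

coord = RecordingCoordinator()
sensor = FrameRateSensor(coord)
sensor.init_probe(lower=23, target=25, upper=27)
sensor.probe_framerate(now=0.00)
sensor.probe_framerate(now=0.04)   # 25 fps: within [23, 27], no event
sensor.probe_framerate(now=0.14)   # 10 fps: below the lower threshold
```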
When the coordinator retrieved the QoS requirements, it also retrieved the ActionList,
the actions to be taken if the QoS requirement is not satisfied. In this example, the
action is to notify the Event Manager process of the QoS management system, whose
handle is EventManagerHandle. The coordinator generates an event whose identifier is
the event identifier found in the received action list.
We note that not all sensors measure attributes that are directly used in the spec-
ification of a QoS requirement. For example, a sensor may be used to measure the
current size of the communications buffer.
2.3 QoS Management System
In the IETF framework, a Policy Decision Point (PDP) is used to retrieve policies
from the repository in which they are stored, and to interpret and validate them. PDPs
also make decisions on actions to be taken upon receipt of an event generated from
the monitoring of the environment. A Policy Enforcement Point (PEP) applies these
actions. In this section, we describe how PDPs and PEPs are used in this work, along
with the additional components needed to support the QoS management strategy
described in Section 1.
Name Server. When an application starts up, it instantiates its coordinator. The coor-
dinator then registers with the Name Server. The Name Server receives registration
data from other components and application processes and maintains it in a repository. It
Given: Video application v.
       QoS expectations e.
1. Perform initialization for v.
2. Initialise coordinator c.
3. Execute S.init_probe(e)
4. while (v not done) do:
5.   Retrieve next video frame f.
6.   Decode and display f.
7.   Execute S.probe_framerate()
8. endwhile
Figure 2. Instrumentation Example
assigns a unique instance identifier for each registered process. The Name Server
coordinates the interaction between the QoS management components and the appli-
cation’s coordinator.
Policy Decision Points (Policy Manager). Within an administrative domain, there is
one Policy Decision Point (PDP) that communicates with a repository where policies
are stored and information about users is kept. This PDP is referred to as a Policy
Manager. The Policy Manager is responsible for retrieving high-level policies, mapping
them to rules, and communicating those rules to the appropriate management entities.
The need for this will become clearer later in this section. The Policy Manager
keeps track of where policies have been deployed.
After the application has registered with the Name Server, it requests its QoS re-
quirements from the Policy Manager. It does this by generating an event requestQoS-
Requirements(ProcessInfo). Upon receiving the request, the Policy Manager deter-
mines the policy that applies. If the policy that applies is Example 2 then the Policy
Manager will execute the script DetermineAllowedFrameRate. The Policy Man-
ager sends the QoS requirements and actions to be taken in the case of QoS violation
to the application’s coordinator as a condition list and action list which were described
earlier.
The Policy Manager is the only type of PDP to receive events directly. The rea-
son for this is the assumption that within an administrative domain there is only one
Policy Manager and that the only type of event that it can receive is the requestQoS-
Requirements event.
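The request/response exchange between a coordinator and the Policy Manager can be sketched as follows (hypothetical method and field names; a stub stands in for the DetermineAllowedFrameRate script):

```python
class PolicyManager:
    """Single PDP per administrative domain; answers
    requestQoSRequirements events from coordinators."""

    def __init__(self, repository):
        self.repository = repository      # {application name: policy}
        self.deployed = {}                # where each policy was deployed

    def request_qos_requirements(self, process_info):
        # Determine which stored policy applies to this process, run any
        # associated script, and return the condition and action lists
        # the application's coordinator expects.
        policy = self._match(process_info)
        conditions = policy["script"]()   # e.g. DetermineAllowedFrameRate
        self.deployed[policy["name"]] = process_info["id"]
        return {"AttributeConditionList": conditions,
                "ActionList": policy["actions"]}

    def _match(self, process_info):
        return self.repository[process_info["app"]]

# Illustrative stand-in for the DetermineAllowedFrameRate script.
def determine_allowed_frame_rate():
    return {"fps_lower": 23, "fps_target": 25, "fps_upper": 27}

repo = {"video_player": {"name": "fps_expectation",
                         "script": determine_allowed_frame_rate,
                         "actions": ["notify(EventManagerHandle, fps_low)"]}}
pm = PolicyManager(repo)
reply = pm.request_qos_requirements({"app": "video_player", "id": 42})
```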
Event Manager. The Event Manager receives events and allows other management
entities to register an interest in an event. The Event Manager can receive events from
the coordinators that are generated in response to a violation of QoS requirements,
and it can receive events that represent a regular report of monitored data (e.g.,
resource usage information).
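A minimal sketch of such an event manager (illustrative API, assuming string event names and callback registration):

```python
from collections import defaultdict

class EventManager:
    """Receives events and forwards them to management entities that
    have registered an interest in them."""

    def __init__(self):
        self.interested = defaultdict(list)

    def register(self, event_name, callback):
        # A management entity (e.g. a diagnosis PDP) declares interest.
        self.interested[event_name].append(callback)

    def receive(self, event_name, payload):
        # Events arrive from coordinators (QoS violations) or as regular
        # reports of monitored data (e.g. resource usage information).
        for callback in self.interested[event_name]:
            callback(event_name, payload)

em = EventManager()
seen = []
em.register("fps_low", lambda name, payload: seen.append(payload))
em.receive("fps_low", {"fps": 18.0})
```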
Policy Decision Points for Violation Location. In Section 1, we identified the need
for violation location. In this work, the location process is event-driven, in the sense
that the location process does not begin until an event indicating that a QoS requirement
has been violated is received.
The prototype is implemented in Java. The rule engine used by the PDP is JESS. The PEP has two components. The
first component, implemented in Java, interacts with PDPs. The second component
consists of specific resource managers, such as the CPU manager and the Memory Manager.
These resource managers actually manipulate the resources; they are written in C and
are specific to Solaris. Currently, the CPU manager is integrated into the prototype.
The Policy Editor has a GUI that allows the user to enter policies. The GUI was
designed to guide the user through the development of a policy by selecting whether
they want to define event names, events, actions, or attribute conditions (through
constraints). Policies are stored in XML, and references to the XML files are placed
in the LDAP directory; XML is the language syntax used to express the Ponder
policies. The user can also put Ponder policies (in the Ponder format) directly into
files and ask that they be loaded into the LDAP repository.
Translation is based on the existence of templates for each type of Ponder policy.
The Policy Manager has a JESS template of rules associated with each Ponder policy.
The Policy Manager uses this template to guide the transformation of a parsed Ponder
component to a JESS rule or part of a JESS rule.
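The template-driven translation can be sketched as follows (the template text is illustrative, not the actual JESS template):

```python
# One template per Ponder policy type; the parsed policy's components
# fill the slots to yield a JESS-style rule string.
JESS_TEMPLATES = {
    "oblig": "(defrule {name} (event (type {event})) => (assert (action {action})))",
}

def ponder_to_jess(parsed_policy):
    """Map a parsed Ponder policy to a rule using its type's template."""
    template = JESS_TEMPLATES[parsed_policy["type"]]
    return template.format(**parsed_policy)

rule = ponder_to_jess({"type": "oblig", "name": "fps_low1",
                       "event": "fps_low", "action": "diagnose"})
```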
We have deployed the prototype within our laboratory and instrumented a number
of applications. Currently, the prototype works as expected.
4. Discussion
This section describes our observations and experiences.
1 Section 2 describes how an application can be instrumented. It will not always
be possible for a user or administrator to instrument an application themselves;
they may have to rely on the output of the logfiles that many applications generate.
It is still possible to use the architecture of the QoS management system de-
scribed in Section 3: the coordinator then refers to a process that reads the logfiles
generated by an application, or uses the operating system to retrieve information
about the application's behavior.
2 Currently, PEPs determine if they are authorized to carry out an action by send-
ing a message to the PDP. Potentially, the PDP could become a bottleneck. The
PEP could have a rule engine to directly determine if it can carry out the action.
This would reduce the load of the PDP, but increase the management load on the
device that the PEP is on. This is not a problem in the sense that our architecture
can easily handle this. Alternatively, multiple PDPs might be necessary. One
PDP focusses on diagnosis and the other PDP focusses on messages from the
PEP.
3 In our earlier work [11], diagnosis was distributed. This assumed that there
were management processes on each host machine as well as a ‘global’ man-
agement process for an administrative domain. Rules to diagnose the cause of a
violation of a QoS requirement were found in the local management processes
as well as the ‘global’ management process. These management processes are
similar to the PDPs. We found that the translation of diagnosis policies to rules,
and the deployment of those rules, was more complicated. This is because the
translation has to take into account that if a management process determines
that it cannot resolve the problem, it needs to send an event to another man-
agement process so that that process might try to resolve it. On the other hand,
the use of one PDP results in many rules being sent to that PDP and it could
become a bottleneck. More work is needed in understanding how to optimize
the configuration of PDPs.
4 It is possible for the different management entities interested in a specific
event to register directly with the component generating the event, i.e., the gen-
erator of the event acts as its own event manager. The disadvantage is that
this assumes these management entities know a priori where the events are
coming from. Another possibility is to have the policy specifications explicitly
specify the source and receivers of a specific event. However, if we change the
configuration of PDPs by changing their location and/or number, then the pol-
icy specification would have to change. By separating these out, we can continue
to use the same policy specification even if the underlying management system
changes.
5 It is assumed that there is agreement on attribute names: the attribute names
of the values sent as part of a notify message from the coordinator are assumed
to be the same as those expected in the diagnosis policies. This requires that
entered policies be checked for consistency.
6 The policy stated in Example 5 is quite detailed, in that it assumes that the admin-
istrator will specify the normalized value (specified using normalizedvalue) to
be used to determine whether there are enough CPU resources available. We be-
lieve that we can simplify this by making it part of the mapping of the policy
to rules.
7 We find the specification of expectation and administrative policies relatively
easy. Diagnostic policies are more difficult and require a good understanding of
the applications and how they behave. The administrator must specify a good
deal of information. There is a good deal of work on diagnostics (e.g., [10, 9,
7, 8]) that use a variety of techniques. There exists efficient software to support
these techniques. We believe that the diagnostics process should be able to take
advantage of these techniques and tools. This will likely require changes in the
way diagnostic policies are specified. Potentially, this could reduce the number
of rules needed by the rule engine.
8 JESS is a rule-based engine. Initially, we found that JESS had a lot of overhead
and had an impact on the application processes executing on the same machine.
However, this was optimized and we no longer have this problem. It seems to be
fast enough, but we do not have experience with an environment with hundreds
of applications and potentially thousands of policies. In that case, however, there
would likely be more PDPs, which would distribute the load and thus minimize
the number of policies at a specific PDP.
5. Related Work
Our work involved the use of policy formalisms and the development of an architec-
ture for the distribution and enforcement of policies. Relatively little prior work has
examined all of these issues in an integrated framework that addresses end-to-end QoS
management; our work does this. In this section, we describe the related work on
policy specifications and architectures, and how our work differs from and relates to it.
Policy Specification.
The IETF is currently developing a set of standards (current drafts found in [14, 19, 22])
that includes an information model for specifying policies, a standard that extends it
for specifying QoS policies, and a standard for mapping the information model to LDAP
schemas. Informally speaking, an IETF policy consists of a set of conditions and a set
of actions. The standards focus on low-level details related to policy encoding, storage,
and retrieval. We have experience in using the IETF standards and did examine the use
of IETF rules for specifying policies. IETF rules are reasonable for expectation policies,
but we found it much more difficult to specify the administrative and diagnostic
policies, and generally found Ponder easier to use. It is possible to specify policies using
Ponder and translate them to the IETF format.
Other language standards by IETF, as well as other network policy languages (sum-
marized in [21]), include RPSL for describing routing policy constraints, PAX for
defining pattern matching criteria in policy-based networking devices and SRL for
creating rule sets for real-time traffic flow measurement. These languages primarily
focus on policies applied to a network device. None of this work addresses the use of
policies applied to applications.
Other policy specification languages [2, 6, 5, 15] focus on features that enable the
specification of security-related policies. The Ponder policy specification language [3]
has a broader scope than most of the other languages, in that it was designed to specify
not only security policy but also general management policy. Ponder allows the
administrator to use declarative statements and is independent of the underlying
systems. This is the primary reason we have chosen Ponder to describe a high-level
specification representation of policies.
Policy languages vary in the level of abstraction. Some languages focus on the
specification of policies that describe what is wanted, while others focus on how to
achieve what is wanted. This has resulted in different levels of policies, which has led
to several attempts at a policy classification. This includes a policy hierarchy and a
formal definition of policies defined by [24]. Our work defines two levels and would
seem to correspond to the bottom two levels of the hierarchy in [24].
Architectures, Policy Distribution and Enforcement.
There has been some recent work in service level management [17, 16] that de-
scribes an architecture and policies for managing a differentiated-services network
so that users receive a good quality of service and the service-level agreements are
fulfilled. This work does not make use of administrative or diagnostic policies.
There has been quite a bit of work (e.g., [1, 18, 23, 12]) that has looked at the trans-
lation of policies to network device configurations which are then sent to the PEPs so
that the PEPs can configure the network devices. The focus is on network devices.
In most of this work, policy distribution is initiated by an administrator and focusses
on one device. Our work differs in that it focusses on applications, and it is the initi-
ation of an application session that causes the distribution of policies to management
components.
A deployment model was developed by [4] for Ponder. Each policy type is com-
piled into a policy class by the Ponder compiler. The instantiation of a policy class is
a policy object. Each policy object has methods that allow a policy object to be loaded
to an enforcement agent and unloaded from an agent; additional methods enable and
disable policy objects. Agents register with the event service to receive relevant
events (as specified by the policies) generated from the managed objects of the sys-
tem. There is no discussion of the configuration of devices or applications. The roles
of the PDP and PEP are similar to that of an enforcement agent. We decided, though,
that the use of JESS would make certain tasks easier. For example, although Ponder
assumes that policies are independent, in the sense that the order in which they are
presented does not matter, we assumed that the order of policies mattered when
translating to JESS rules. This means that not every JESS rule interested in a specific
event has to be executed. This is important for the following reason: Example 3
described a policy called fps_low1 that is triggered when the event fps_low is
generated. Another policy, fps_low2, is also triggered when the event fps_low is
generated; the difference is that the action to be taken depends on different conditions
of the state. If the conditional statement in the when clause of Example 3 is true, this
implies that the fault has been found and there is no reason to look at fps_low2.
JESS's implementation allows for this. In the Ponder deployment model, all policy
objects that register for a specific event receive notification that the event has
occurred and all conditions are checked.
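The ordering behaviour can be illustrated with a small sketch (the conditions are hypothetical; Example 3's actual policy is not reproduced): rules registered for one event are tried in order, and evaluation stops at the first rule whose condition holds.

```python
def fire_first(rules, state):
    """Try the rules registered for one event in order; return the name
    and action of the first rule whose condition holds, skipping the rest."""
    for name, condition, action in rules:
        if condition(state):
            return name, action(state)
    return None, None

# Two ordered rules for the fps_low event (conditions are illustrative).
rules_for_fps_low = [
    ("fps_low1", lambda s: s["cpu_load"] > 0.9,
                 lambda s: "raise CPU priority"),
    ("fps_low2", lambda s: s["buffer_fill"] < 0.2,
                 lambda s: "grow communications buffer"),
]

# fps_low1's condition holds first, so fps_low2 is never evaluated.
fired, action = fire_first(rules_for_fps_low,
                           {"cpu_load": 0.95, "buffer_fill": 0.1})
```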
Generally, we found that very little work focusses on applying policies at the appli-
cation level. The difficulty with the application level is that different policies will be
needed for different sessions of the same application.
6. Conclusions
The initial deployment of the prototype has proven successful. The exper-
imental results are similar to those reported in [13, 11]. The prototype allows a user
to specify Ponder policies from a console and successfully distribute them to the
distributed components of the management system. This work is one of the few that
addresses the use of policies for application QoS management.
Future work includes addressing some of the issues brought up earlier in the Dis-
cussion section as well as the following:
1 We chose to specify our framework based on the IETF policy-based manage-
ment framework. We are currently working on integrating network resource
management.
2 We would like to do more work on ensuring that policy specifications are con-
sistent. This requires many changes to the mapping approach used.
3 We are examining an environment with a mix of applications such as soft IP
phones and web servers.
References
[1] M. Brunner and J. Quittek. MPLS Management using Policies. Proceedings of the 7th IEEE/IFIP
Symposium on Integrated Network Management (IM’01), Seattle, USA, May 2001.
[2] F. Chen and R. Sandhu. Constraints for Role-Based Access Control. Proceedings of the 1st ACM/NIST
Role Based Access Control Workshop, Gaithersburg, Maryland, USA, ACM Press, 1995.
[3] N. Damianou, N. Dulay, E. Lupu, and M. Sloman. Ponder: A Language for Specifying Security and
Management Policies for Distributed Systems: The Language Specification (Version 2.1). Imperial
College Research Report DOC 2000/01, Imperial College of Science, Technology and Medicine,
London, England, April 2000.
[4] N. Dulay, E. Lupu, M. Sloman, and N. Damianou. A Policy Deployment Model for the Ponder Lan-
guage. Proceedings of the 7th IEEE/IFIP Symposium on Integrated Network Management (IM’01),
Seattle, USA, May 2001.
[5] J. Hoagland, R. Pandey, and K. Levitt. Security Policy Specification Using a Graphical Approach.
Technical Report CSE-98-3, UC Davis Computer Science Department, July 1998.
[6] S. Jajodia, P. Samarati, and V. Subrahmanian. A Logical Language for Expressing Authorisations.
Proceedings of the IEEE Symposium on Security and Privacy, 1997.
[7] S. Katker. A Modelling Framework for Integrated Distributed Systems Fault Management. In Pro-
ceedings of the IFIP/IEEE International Conference on Distributed Platforms, pages 186–198, 1996.
[8] S. Katker and K. Geihs. A Generic Model for Fault Isolation in Integrated Management Systems.
Journal of Network and Systems Management, 1997.
[9] S. Katker and M. Paterok. Fault Isolation and Event Correlation for Integrated Fault Management. In
Proceedings of the 5th International Symposium on Integrated Network Management, 1997.
[10] S. Kliger, S. Yemini, Y. Yemini, D. Ohsie, and S. Stolfo. A Coding Approach to Event Correlation.
In Proceedings of the 4th International Symposium on Integrated Network Management, 1995.
[11] H. Lutfiyya, G. Molenkamp, M. Katchabaw, and M. Bauer. Issues in Managing Soft QoS Require-
ments in Distributed Systems Using a Policy-Based Framework. Proceedings of the International
Workshop on Policies for Distributed Systems and Networks (Policy 2001), Bristol, UK,
Springer-Verlag LNCS, pages 185–201, January 2001.
[12] P. Martinez, M. Brunner, J. Quittek, F. Strauss, J. Schoenwaelder, S. Mertens, and T. Klie. Using the
Script MIB for Policy-Based Configuration Management. Proceedings of the IFIP/IEEE Network
Operations and Management Symposium (NOMS’02), Florence, Italy, pages 461–468, April 2002.
[13] G. Molenkamp, H. Lutfiyya, M. Katchabaw, and M. Bauer. Resource Management to Support
Application-Specific Quality of Service. IEEE/IFIP Management of Multimedia Networks and Ser-
vices (MMNS 2001), October 2001.
[14] B. Moore, J. Strassner, and E. Ellesson. Policy Core Information Model – Version 1 Specification.
Technical report, IETF, May 2000.
[15] R. Ortalo. A Flexible Method for Information System Security Policy Specification. Proceedings
of the 5th European Symposium on Research in Computer Security (ESORICS 98), Louvain-la-Neuve,
Belgium, Springer-Verlag, 1998.
[16] P. Pereira, D. Dadok, and P. Pinto. Service Level Management of Differentiated Services Networks
with Active Policies. 3rd Conferencia de Telecomunicacoes, Rio de Janeiro, Brazil, December 1999.
[17] P. Pereira and P. Pinto. Algorithms and Contracts for Network and Systems Management.
Proceedings of the 1st IEEE Latin American Network Operations and Management Symposium
(LANOMS 99), Rio de Janeiro, Brazil, December 1999.
[18] A. Gonzalez Prieto and M. Brunner. SLS to DiffServ Configuration Mappings. Pro-
ceedings of the 12th International Workshop on Distributed Systems: Operations and Management
(DSOM 2001), Nancy, France, October 2001.
[19] Y. Snir, Y. Ramberg, J. Strassner, and R. Cohen. Policy Framework QoS Information Model. Tech-
nical report, IETF, April 2000.
[20] Stardust.com. Introduction to QoS Policies. Technical report, Stardust.com, Inc., July 1999.
[21] G. Stone, B. Lundy, and G. Xie. Network Policy Languages: A Survey and New Approaches. IEEE
Network, 15(1):10–21, January 2001.
[22] J. Strassner, E. Ellesson, B. Moore, and R. Moats. Policy Framework LDAP Core Schema. Tech-
nical report, IETF, November 1999.
[23] P. Trimintzios, I. Andrikopoulos, G. Pavlou, and C. Cavalcanti. An Architectural Framework for
Providing QoS in IP Differentiated Services Networks. Proceedings of the 7th IEEE/IFIP Symposium
on Integrated Network Management (IM’01), Seattle, USA, May 2001.
[24] R. Wies. Policies in Network and Systems Management – Formal Definition and Architecture. Jour-
nal of Network and Systems Management, 2(1):63–83, 1994.
[25] D. Yau and S. Lam. Adaptive Rate-Controlled Scheduling for Multimedia Applications. Proceedings
of the 1996 ACM Multimedia Conference, Boston, Massachusetts, November 1996.