This document discusses a new scheduling algorithm proposed for managing requests for context-aware software deployed in a cloud computing environment. The algorithm aims to improve the performance of servers hosting high-demand context-aware applications while reducing cloud providers' costs. It does this by classifying similar context requests and dynamically scoring requests, with the goal of processing requests for similar context data in parallel to reduce response times. The algorithm is evaluated through simulation and found to improve efficiency compared to the gi-FIFO scheduling algorithm.
A latency-aware max-min algorithm for resource allocation in cloud (IJECEIAES)
Cloud computing is an emerging distributed computing paradigm. However, it requires certain initiatives tailored for the cloud environment, such as an on-the-fly mechanism for providing resource availability based on the rapidly changing demands of customers. Although resource allocation is an important and widely studied problem, certain criteria still need to be considered, including meeting users' quality of service (QoS) requirements. High QoS can be guaranteed only if resources are allocated in an optimal manner. This paper proposes a latency-aware max-min algorithm (LAM) for the allocation of resources in cloud infrastructures. The proposed algorithm was designed to address challenges associated with resource allocation, such as variations in user demands and on-demand access to unlimited resources. It is capable of allocating resources in a cloud-based environment with the target of enhancing infrastructure-level performance and maximizing profit through optimum allocation of resources. A priority value is also associated with each user, calculated by the analytic hierarchy process (AHP). The results validate the superiority of LAM, which outperforms other state-of-the-art algorithms while remaining flexible under fluctuating resource demand patterns.
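As a hedged illustration only (the paper's exact LAM formulation is not reproduced here), the max-min idea with latency folded into completion times might be sketched as follows; all names, and the choice to model latency as a fixed per-VM additive term, are assumptions:

```python
# Hypothetical latency-aware max-min sketch (illustrative names; latency is
# modelled as a fixed per-VM additive term, which is an assumption).

def latency_aware_max_min(task_lengths, vm_speeds, vm_latencies):
    """Assign tasks to VMs with the max-min heuristic, where a task's
    completion time on a VM includes that VM's network latency."""
    ready = [0.0] * len(vm_speeds)        # time at which each VM is free
    assignment = {}
    pending = dict(enumerate(task_lengths))
    while pending:
        best = None                       # (completion_time, task, vm)
        for t, length in pending.items():
            # best (minimum) completion time of task t over all VMs
            c, v = min((ready[v] + length / vm_speeds[v] + vm_latencies[v], v)
                       for v in range(len(vm_speeds)))
            # max-min: schedule the task whose best completion time is
            # largest, so large tasks are placed before small ones
            if best is None or c > best[0]:
                best = (c, t, v)
        c, t, v = best
        assignment[t] = v
        ready[v] = c
        del pending[t]
    return assignment, max(ready)         # task->VM mapping and makespan
```

For instance, two tasks of lengths 10 and 4 on VMs of speeds 1 and 2 with zero latency place the long task on the fast VM first, then fill in the short task elsewhere.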
An efficient resource sharing technique for multi-tenant databases (IJECEIAES)
Multi-tenancy is a key component of the Software as a Service (SaaS) paradigm. Multi-tenant software has gained a lot of attention in academia, research and business. It provides scalability and economic benefits for both cloud service providers and tenants by sharing the same resources and infrastructure, with isolation of shared databases, network and computing resources under Service Level Agreement (SLA) compliance. In a multi-tenant scenario, active tenants compete for resources in order to access the database. If one tenant monopolizes the resources, the performance of all the other tenants may be degraded and fair sharing of the resources may be compromised. The performance of tenants must not be affected by the resource-intensive activities and volatile workloads of other tenants. Moreover, the prime goal of providers is to achieve a low cost of operation while satisfying the specific schemas/SLAs of each tenant. Consequently, there is a need to design and develop effective, dynamic resource sharing algorithms that can handle the above issues. This work presents a model referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM), embracing a query classification and worker sorting technique that enables efficient and dynamic resource sharing among tenants. The experiments show significant performance improvement over the existing model.
Role of Operational System Design in Data Warehouse Implementation: Identifyi... (iosrjce)
The data warehouse design process takes input from the operational system of the organization, and the quality of a data warehousing solution depends on the design of that operational system. Often, organizations' operational system implementations have limitations, so we cannot proceed directly to data warehouse design. In this paper, we investigate the operational system of an organization to identify such limitations and to determine the role of operational system design in the process of data warehouse design and implementation. We work out possible methods to handle these limitations and propose techniques to obtain a quality data warehousing solution under them. To ground the work in a live example, the National Rural Health Mission (NRHM) Project has been taken: a national health-sector project managed by the Indian Government across the country. Its complex structure and high volume of data make it an ideal case for data warehouse implementation.
Cloud computing is a fast-emerging technology in the IT domain that offers distinct services and applications, focusing on providing sustainable, reliable, scalable and virtualized resources to its consumers. The main aim of cloud computing is to enhance the use of distributed resources to achieve higher throughput and resource utilization in large-scale computation problems. Scheduling affects the efficiency of the cloud and plays a significant role in creating a high-performance environment. The Quality of Service (QoS) requirements of user applications define the scheduling of resources. A number of researchers have tried to solve these scheduling problems using different QoS-based scheduling techniques. In this paper, a detailed analysis of resource scheduling methodology is presented; different types of scheduling based on soft computing techniques, their comparisons, benefits and results are discussed. The major findings of this paper help researchers decide on a suitable approach for scheduling users' applications considering their QoS requirements.
A SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTING (ijccsa)
Cloud computing delivers on-demand services, from applications to data centers, on a pay-per-use basis. In order to allocate these resources properly and satisfy users' demands, an efficient and flexible resource allocation mechanism is needed. Due to increasing user demand, the resource allocation process has become more challenging and difficult, and developing optimal solutions for it is a main focus of research. In this paper, a literature review of proposed dynamic resource allocation techniques is presented.
Differentiating Algorithms of Cloud Task Scheduling Based on various Parameters (iosrjce)
Cloud computing is a new design structure for large, distributed data centers. Cloud computing systems promise end users a "pay as you go" model. To meet the expected quality requirements of users, cloud computing needs to offer differentiated services; QoS differentiation is very important to satisfy different users with different QoS requirements. In this paper, various QoS-based scheduling algorithms, their scheduling parameters and their future scope have been studied. The paper summarizes various cloud scheduling algorithms, their findings, scheduling factors, types of scheduling and the parameters considered.
Allocation Strategies of Virtual Resources in Cloud-Computing Networks (IJERA Editor)
Within distributed computing, Cloud computing facilitates a pay-per-use model driven by user demand and requirements. The Cloud is formed by a collection of virtual machines providing both computational and storage resources, and its main objective is to provide efficient access to remote and geographically distributed resources. The Cloud faces many challenges, one of which is the scheduling/allocation problem. Scheduling refers to a set of policies that control the order of work performed by a computer system; a good scheduler adapts its allocation strategy to the changing environment and the type of task. In this paper we examine FCFS and Round Robin scheduling, in addition to a Linear Integer Programming approach to resource allocation.
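The two classical policies mentioned above can be sketched in a few lines. This is a generic illustration, not the paper's LP-based formulation, and the function names are invented:

```python
# Minimal sketches of FCFS and Round Robin task dispatch across VMs
# (illustrative only; each task is just an execution time in seconds).

def fcfs(tasks, n_vms):
    """Dispatch tasks in arrival order, always to the earliest-free VM."""
    free_at = [0.0] * n_vms
    schedule = []
    for t in tasks:
        vm = free_at.index(min(free_at))  # earliest available VM
        schedule.append(vm)
        free_at[vm] += t
    return schedule, max(free_at)         # assignment and makespan

def round_robin(tasks, n_vms):
    """Dispatch tasks cyclically over VMs, regardless of current load."""
    free_at = [0.0] * n_vms
    schedule = []
    for i, t in enumerate(tasks):
        vm = i % n_vms
        schedule.append(vm)
        free_at[vm] += t
    return schedule, max(free_at)
```

On a skewed workload such as `[5, 1, 1, 1]` over two VMs, FCFS packs the short tasks around the long one while Round Robin blindly alternates, which is exactly the kind of difference such comparisons measure.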
Hybrid Based Resource Provisioning in Cloud (Editor IJCATR)
Data centres contain machines of different capacities, each with its own energy-consumption characteristics. Analysing public cloud workloads of different priorities, together with the performance requirements of various applications, reveals some invariant patterns in the cloud, and cloud data centres become capable of sensing opportunities to schedule work differently. In our proposed work, we use a hybrid method for resource provisioning in data centres. The method allocates resources according to current working conditions and power consumption, and governs how processes are placed over the cloud storage.
IMPACT OF RESOURCE MANAGEMENT AND SCALABILITY ON PERFORMANCE OF CLOUD APPLICA... (IJCSEA Journal)
Cloud computing enables service providers to rent out their computing capabilities for deploying applications according to user requirements. Cloud applications have diverse composition, configuration and deployment requirements, and quantifying their performance in Cloud computing environments is a challenging task. In this paper, we identify various parameters associated with the performance of cloud applications and analyse the impact of resource management and scalability on them.
ABSTRACT
Cloud computing utilizes large-scale computing infrastructure that has been radically changing the IT landscape, enabling remote access to computing resources with low service cost, high scalability, availability and accessibility. Serving tasks from multiple users, where the tasks have different characteristics and varying computing-power requirements, may cause under- or over-utilization of resources. Maintaining such a mega-scale datacenter therefore requires an efficient resource management procedure to increase resource utilization. However, while maintaining efficiency in service provisioning, it is also necessary to maximize profit for the cloud providers. Most current research aims at how providers can offer efficient service provisioning to users and improve system performance; comparatively few works on resource management also address the economic side of profit maximization for the provider. In this paper we present a model that deals with both efficient resource utilization and pricing of resources. The joint resource management model combines user assignment, task scheduling and load balancing based on CPU power endorsement. We propose four algorithms, respectively, for user assignment, task scheduling, load balancing and pricing that work on group-based resource offering, reducing task execution time (56.3%), activated physical machines (41.44%) and provisioning cost (23%). The cost is calculated over a time interval involving the number of customers served in that time and the amount of resources used within it.
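Of the four algorithms, the load-balancing step is the easiest to illustrate. The greedy sketch below is only one plausible reading of "load balancing on CPU power"; the function and parameter names are invented and it is not the paper's actual algorithm:

```python
# Greedy CPU-capacity load balancing (illustrative only; places each task,
# largest first, on the machine whose relative utilisation stays lowest).

def balance(task_mips, machine_mips):
    used = [0.0] * len(machine_mips)      # MIPS consumed on each machine
    placement = []
    for t in sorted(task_mips, reverse=True):
        # pick the machine whose utilisation after accepting t is smallest
        vm = min(range(len(used)),
                 key=lambda v: (used[v] + t) / machine_mips[v])
        placement.append(vm)
        used[vm] += t
    return placement, used
```

Placing large tasks first is a standard trick that keeps the final loads close to even, which is one way execution time and the number of activated machines can both drop.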
A survey on various resource allocation policies in cloud computing environment (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Efficient Resource Sharing In Cloud Using Neural Network (IJERA Editor)
In cloud computing, collaborative cloud computing (CCC) is an emerging technology in which globally dispersed cloud resources belonging to different organizations are collectively used in a cooperative manner to provide services. In previous research, Harmony enables a node to locate its desired resources and also find the reputation of the located resources, so that a client can choose resource providers not only by resource availability but also by the provider's reputation for providing the resource. In the proposed system, to improve resource utilization, resources are allocated over an optimal time period determined by neural network training and load-factor calculation, and a dynamic priority scheduling technique assigns priorities to the cloud users according to their load. The dynamic priority scheduling algorithm strikes the right balance between performance and power efficiency.
ADVANCES IN HIGHER EDUCATIONAL RESOURCE SHARING AND CLOUD SERVICES FOR KSA (IJCSES Journal)
The cloud represents an important change in the way information technology is used. It makes it possible to access work anywhere, anytime and to share it with anyone [1], and it is changing the way people communicate, work and learn [2]. In this changing environment, it is important to consider the opportunities and risks of using the cloud in the education field, and the lessons we can learn from previous uses of this technology there. In order to gain the benefits of the cloud for the educational system in KSA, a comprehensive study of the scientific literature is presented in this paper. The paper also presents significant information such as findings, case studies, related frameworks and supporting tools associated with the migration of organizational resources to the cloud.
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ... (Editor IJCATR)
The aim of cloud computing is to share a large number of resources and pieces of equipment to compute and store knowledge and information for large scientific workloads. The scheduling algorithm is therefore regarded as one of the most important challenges in the cloud. To solve the task scheduling problem in this study, the ant colony optimization (ACO) algorithm was adapted from social theories, with a fair and accurate resource allocation approach based on machine performance and capacity. The study was intended to decrease runtime and execution costs, to optimize the use of machines and to reduce their idle time. Finally, the proposed method was compared with Berger and greedy algorithms. The simulation results indicate that the proposed algorithm reduced the makespan and execution cost as tasks were added, increased fairness and load balancing, made optimal use of machines possible and increased user satisfaction. According to the evaluations, the proposed algorithm improved the makespan by 80%.
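A toy version of ACO applied to task-to-machine assignment is sketched below; the pheromone update, evaporation rate and heuristic weighting here are generic textbook choices, not the paper's tuned parameters:

```python
import random

# Toy ant-colony task scheduler (illustrative; all parameters are
# assumptions, not the paper's configuration).

def aco_schedule(tasks, speeds, n_ants=10, n_iters=30, rho=0.1, seed=1):
    rng = random.Random(seed)
    n, m = len(tasks), len(speeds)
    tau = [[1.0] * m for _ in range(n)]        # pheromone per (task, vm)
    best_assign, best_make = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            load = [0.0] * m
            assign = []
            for t in range(n):
                # desirability: pheromone x heuristic (prefer VMs that are
                # fast and lightly loaded for this task)
                w = [tau[t][v] / (1.0 + load[v] + tasks[t] / speeds[v])
                     for v in range(m)]
                v = rng.choices(range(m), weights=w)[0]
                assign.append(v)
                load[v] += tasks[t] / speeds[v]
            make = max(load)                   # makespan of this ant's tour
            if make < best_make:
                best_make, best_assign = make, assign
        # evaporate all pheromone, then reinforce the best tour found
        for t in range(n):
            for v in range(m):
                tau[t][v] *= (1.0 - rho)
        for t, v in enumerate(best_assign):
            tau[t][v] += 1.0 / best_make
    return best_assign, best_make
```

With two equal tasks and two equal machines, the colony quickly converges on the even split, i.e. a makespan of one task per machine.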
SERVICE ORIENTED QUALITY REQUIREMENT FRAMEWORK FOR CLOUD COMPUTING (ijcsit)
This research paper introduces a framework for identifying the quality requirements of cloud computing services. It considers two dominant sub-layers, a functional layer and a runtime layer, against cloud characteristics. SERVQUAL model attributes and the opinions of industry experts were used to derive the quality constructs in a cloud computing environment. The framework gives a proper identification of users' quality expectations for cloud computing services. The validity of the framework was evaluated using a questionnaire-based survey, and the partial least squares-structural equation modelling (PLS-SEM) technique was used to evaluate the outcome. The research findings show that the significance of the functional layer is higher than that of the runtime layer, and that the prioritized quality factors of the two layers are service time, information and data security, recoverability, service transparency, and accessibility.
An Overview of Workflow Management on Mobile Agent Technology (IJERA Editor)
Mobile agent workflow management/plugins are well suited to handling control flows in open distributed systems; this is an emerging technology that can bring process-oriented tasks from diverse frameworks to run as a single unit. Workflow technology offers organizations the opportunity to reshape business processes beyond the boundaries of their own organizations, so that instead of static models the modern era uses dynamic workflows that can respond to changes during execution, provide the necessary security measures and a great degree of adaptivity, troubleshoot running processes, and recover lost states through fault tolerance. The prototype we plan to design aims to provide reliability, security, robustness and scalability without being forced to trade off performance. This paper is concerned with the design, implementation and performance evaluation of improved methods for the proposed prototype models, based on current research in this domain.
Effective and Efficient Job Scheduling in Grid Computing (Aditya Kokadwar)
The integration of remote and diverse resources, the increasing computational needs of Grand Challenge problems, and the rapid growth of the internet and communication technologies have led to the development of global computational grids. Grid computing is a prevailing technology that unites underutilized resources in order to support sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential to achieve the promised capacity of grids. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate resources with minimum cost, merging this with a conventional job grouping strategy to provide better and more efficient job scheduling that benefits both user and resource broker. The proposed scheduling approach employs a dynamic cost-based job scheduling algorithm to map jobs efficiently to the available resources in the grid. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
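The grouping step can be illustrated roughly as follows; the cost-first ordering and the MIPS-times-granularity packing rule are assumptions about how such grouping is commonly done in grid schedulers, not the paper's exact policy:

```python
# Illustrative cost-priority job grouping. Each job is (length_in_MI, cost);
# groups are sized to what the resource can process in `granularity_s`
# seconds, reducing per-job transfer overhead (and thus improving CCR).

def group_jobs(jobs, resource_mips, granularity_s=10):
    capacity = resource_mips * granularity_s        # instructions per group
    groups, current, used = [], [], 0
    for length, cost in sorted(jobs, key=lambda j: j[1]):  # cheapest first
        if current and used + length > capacity:
            groups.append(current)                  # close the full group
            current, used = [], 0
        current.append((length, cost))
        used += length
    if current:
        groups.append(current)
    return groups
```

Sending one grouped bundle instead of many tiny jobs is what raises the communication-to-computation ratio mentioned in the abstract.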
A LIGHT-WEIGHT DISTRIBUTED SYSTEM FOR THE PROCESSING OF REPLICATED COUNTER-LI... (ijdpsjournal)
In order to increase availability in a distributed system, some or all of the data items are replicated and stored at separate sites. This is an issue of key concern, especially given the proliferation of wireless technologies and mobile users. However, the concurrent processing of transactions at separate sites can generate inconsistencies in the stored information. We have built a distributed service that manages updates to widely deployed counter-like replicas. There are many heavy-weight distributed systems targeting large information-critical applications; our system is intentionally lightweight and is aimed at less demanding information-critical applications. The service is built on our distributed concurrency control scheme, which combines optimism and pessimism in the processing of transactions: a transaction may be processed immediately (optimistically) at any individual replica as long as it satisfies a cost bound, while all transactions are also processed in a concurrent, pessimistic manner to ensure mutual consistency.
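The cost-bound idea can be illustrated with a toy counter replica; splitting the bound equally across replicas is an assumption for the sketch, not the paper's protocol:

```python
# Toy cost-bounded optimistic counter replica (illustrative names only).
# A replica may apply a decrement immediately as long as its local spend
# stays within its share of the global cost bound; anything larger must
# wait for the pessimistic (coordinated) path.

class CounterReplica:
    def __init__(self, value, n_replicas, cost_bound):
        self.value = value
        self.local_debit = 0                     # spent optimistically here
        self.share = cost_bound // n_replicas    # this replica's safe budget

    def try_decrement(self, amount):
        """Commit locally iff the decrement fits this replica's budget."""
        if self.local_debit + amount <= self.share:
            self.local_debit += amount
            self.value -= amount
            return True                          # committed optimistically
        return False                             # defer to coordination
```

Because no replica can optimistically spend more than its share, the replicas' combined optimistic decrements can never overdraw the counter, which is the safety property such bounds buy.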
HIGHLY SCALABLE, PARALLEL AND DISTRIBUTED ADABOOST ALGORITHM USING LIGHT WEIG... (ijdpsjournal)
AdaBoost is an important algorithm in machine learning and is being widely used in object detection.
AdaBoost works by iteratively selecting the best amongst weak classifiers, and then combines several weak
classifiers to obtain a strong classifier. Even though AdaBoost has proven to be very effective, its learning
execution time can be quite large depending upon the application e.g., in face detection, the learning time
can be several days. Due to its increasing use in computer vision applications, the learning time needs to
be drastically reduced so that an adaptive near real time object detection system can be incorporated. In
this paper, we develop a hybrid parallel and distributed AdaBoost algorithm that exploits the multiple
cores in a CPU via lightweight threads, and also uses multiple machines via a web service software
architecture to achieve high scalability. We present a novel hierarchical web-services-based distributed
architecture and achieve nearly linear speedup up to the number of processors available to us. In
comparison with the previously published work, which used a single level master-slave parallel and
distributed implementation [1] and only achieved a speedup of 2.66 on four nodes, we achieve a speedup of
95.1 on 31 workstations each having a quad-core processor, resulting in a learning time of only 4.8
seconds per feature.
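The parallelism being exploited is easy to see in miniature: scoring every candidate weak classifier against the current weight vector is embarrassingly parallel. The one-dimensional decision stump below is purely illustrative and unrelated to the paper's face-detection features:

```python
from concurrent.futures import ThreadPoolExecutor

# Each AdaBoost round must find the weak classifier with the lowest
# weighted error; evaluating the candidates is independent work, so it
# can be fanned out across threads (or, as in the paper, across machines).

def weighted_error(threshold, xs, ys, w):
    # the stump predicts +1 when x >= threshold, else -1
    return sum(wi for x, y, wi in zip(xs, ys, w)
               if (1 if x >= threshold else -1) != y)

def best_stump_parallel(xs, ys, w, candidates, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        errs = list(pool.map(lambda t: weighted_error(t, xs, ys, w),
                             candidates))
    i = min(range(len(candidates)), key=errs.__getitem__)
    return candidates[i], errs[i]
```

Replacing the thread pool with remote workers gives the web-services version of the same fan-out, which is where the near-linear speedup comes from.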
Spectrum requirement estimation for imt systems in developing countries (ijdpsjournal)
In this paper we analyze the methodology developed by the International Telecommunication Union (ITU) for estimating the spectrum requirement for International Mobile Telecommunications (IMT) systems. The International Telecommunication Union estimates spectrum requirements by following ITU-R Rec. M.1768. Although this methodology is adopted by ITU-R, there are discrepancies in estimating the spectrum requirement for developing countries. ITU estimates the spectrum requirement by considering technical and market parameters that were provided by the most developed countries, with high income and a high development index. Developed countries have a very rapidly expanding telecom market due to the high level of penetration, dominant user density and usage of high-volume multimedia services. In contrast, developing countries use less bandwidth-intensive services such as voice communication, low-rate data, and low and medium multimedia. However, while the input parameters are adequate for developed countries, they do not reflect the status of developing countries. For this reason the ITU spectrum estimation overestimates the exact spectrum requirements of IMT systems for developing countries. This paper presents an approach based on technical and market-related parameters, which is thought to be applicable for overcoming the shortcomings of the current ITU methodology in estimating the spectrum requirement for developing countries like Bangladesh.
Advanced delay reduction algorithm based on GPS with Load Balancing (ijdpsjournal)
A Mobile Ad-Hoc Network (MANET) is a self-configuring network of mobile nodes connected by wireless links, forming an arbitrary topology. The nodes are free to move arbitrarily, so the network's wireless topology may be random and may change quickly. An ad hoc network is formed by sensor networks consisting of sensing, data processing, and communication components. Congested links occur frequently in such a network, as wireless links inherently have significantly lower capacity than hardwired links and are therefore more prone to congestion. Here we propose an algorithm that reduces delay with the help of a Request_set created from the location information of the destination node; the load is distributed equally across the paths found in the Route Reply (RREP) packets.
On the fly porn video blocking using distributed multi gpu and data mining ap... (ijdpsjournal)
Preventing users from accessing adult videos and at the same time allowing them to access good
educational videos and other materials through campus wide network is a big challenge for organizations.
Major existing web filtering systems are textual content or link analysis based. As a result, potential users
cannot access qualitative and informative video content which is available online. Adult content detection
in video based on motion features or skin detection requires significant computing power and time.
Judgment to identify pornography videos is taken based on processing of every chunk from video,
consisting specific number of frames, sequentially one after another. This solution is not feasible in real
time when user has started watching the video and decision about blocking needs to be taken within few
seconds.
In this paper, we propose a model where user is allowed to start watching any video; at the backend porn
detection process using extracted video and image features shall run on distributed nodes with multiple
GPUs (Graphics Processing Units). The video is processed on parallel and distributed platform in shortest
time and decision about filtering the video is taken in real time. Track record of blocked content and
websites is cached, too. For every new video downloads, cache is verified to prevent repetitive content
analysis. On the fly blocking is feasible due to latest GPU architecture, CUDA (Compute Unified Device
Architecture) and CUDA aware MPI (Message Passing Interface). It is possible to achieve coarse grained
as well as fine grained parallelism. Video Chunks are processed parallel on distributed nodes. Porn
detection algorithm on frames of chunks of videos can also achieve parallelism using GPUs on single node.
It ultimately results into blocking porn video on the fly and allowing educational and informative videos.
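A minimal sketch of the pipeline shape described above: split the video into fixed-size chunks, score the chunks in parallel, and cache the verdict per URL so a re-downloaded video is never re-analysed. The scorer, names, and threshold are placeholders, a real system would run a trained model on the GPU:

```python
# Chunk-parallel classification with a verdict cache ("track record").
from concurrent.futures import ThreadPoolExecutor

verdict_cache = {}  # url -> blocked?

def classify_chunk(chunk):
    # Placeholder scorer standing in for GPU feature extraction + model.
    return sum(chunk) / len(chunk) > 0.8

def should_block(url, frames, chunk_size=4):
    if url in verdict_cache:          # cache checked before re-analysis
        return verdict_cache[url]
    chunks = [frames[i:i + chunk_size] for i in range(0, len(frames), chunk_size)]
    with ThreadPoolExecutor() as pool:  # chunks processed in parallel
        blocked = any(pool.map(classify_chunk, chunks))
    verdict_cache[url] = blocked
    return blocked
```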
Derivative threshold actuation for single phase wormhole detection with reduc... (ijdpsjournal)
Communication in mobile ad hoc networks is carried out via multi-hop paths. Owing to the distributed architecture and restricted resources of nodes, MANETs are highly prone to wormhole attacks; wormhole attacks pose severe threats to every ad hoc routing protocol and to several security enhancements. Thus, to discover wormholes, different techniques are in use. In all those techniques the threshold is fixed merely by trial-and-error or in a random manner. Moreover, wormhole detection proceeds in two phases: nodes above the threshold are first placed in a suspicious set, and a node is then confirmed as a wormhole by other algorithms. Our aim in this paper is to deduce the traffic threshold level by a derivational approach for identifying wormholes in a single phase in a relay network with dissimilar characteristics.
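One way to read the derivational idea is to take the discrete derivative of the sorted per-node traffic profile and flag nodes beyond the sharpest jump, rather than fixing a threshold by trial and error. The criterion and names below are illustrative, not the paper's method:

```python
# Derivative-style threshold: flag nodes whose relayed-traffic count
# sits above an unusually large jump in the sorted traffic profile.
def suspicious_nodes(traffic):
    """traffic: {node: packets relayed}. Returns candidate wormhole nodes."""
    counts = sorted(traffic.items(), key=lambda kv: kv[1])
    values = [v for _, v in counts]
    deltas = [b - a for a, b in zip(values, values[1:])]  # discrete derivative
    if not deltas:
        return set()
    biggest = max(deltas)
    if biggest <= sum(deltas) / len(deltas):  # no outstanding jump
        return set()
    cut = deltas.index(biggest) + 1
    return {n for n, _ in counts[cut:]}       # suspicious set, single phase
```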
Hybrid Based Resource Provisioning in Cloud (Editor IJCATR)
Data centres contain machines of different capacities, each with its own energy consumption characteristics. When we analysed public cloud workloads of different priorities and the performance requirements of various applications, we noted some invariant patterns in the cloud; cloud data centres thus become capable of sensing an opportunity to schedule work differently. In our proposed work, we use a hybrid method for resource provisioning in data centres. This method allocates resources according to current working conditions while also accounting for power consumption, and governs how allocation proceeds behind the cloud storage.
IMPACT OF RESOURCE MANAGEMENT AND SCALABILITY ON PERFORMANCE OF CLOUD APPLICA... (IJCSEA Journal)
Cloud computing enables service providers to rent out their computing capabilities for deploying applications depending on user requirements. Cloud applications have diverse composition, configuration and deployment requirements. Quantifying the performance of applications in cloud computing environments is a challenging task. In this paper, we try to identify various parameters associated with the performance of cloud applications and analyse the impact of resource management and scalability among them.
ABSTRACT
Cloud computing utilizes large-scale computing infrastructure that has been radically changing the IT landscape, enabling remote access to computing resources with low service cost, high scalability, availability and accessibility. Serving tasks from multiple users, where the tasks have different characteristics and varying requirements for computing power, may cause under- or over-utilization of resources. Maintaining such mega-scale datacenters therefore requires an efficient resource management procedure to increase resource utilization. However, while maintaining efficiency in service provisioning, it is necessary to ensure maximization of profit for the cloud providers. Most current research aims at how providers can offer efficient service provisioning to the user and improve system performance. There are comparatively fewer works on resource management that also deal with the economic side, considering profit maximization for the provider. In this paper we present a model that deals with both efficient resource utilization and pricing of the resources. The joint resource management model combines user assignment, task scheduling and load balancing on the basis of CPU power endorsement. We propose four algorithms, respectively, for user assignment, task scheduling, load balancing and pricing, which work on group-based resources and offer reductions in task execution time (56.3%), activated physical machines (41.44%) and provisioning cost (23%). The cost is calculated over a time interval involving the number of customers served in that time and the amount of resources used within it.
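The "fewer activated physical machines" goal above is essentially a packing problem: place tasks (by CPU demand) onto as few machines as possible. First-fit-decreasing below is an illustrative stand-in for the paper's four algorithms, with integer CPU units as an assumption:

```python
# Greedy first-fit-decreasing bin packing: fewer activated machines.
def pack_tasks(demands, capacity):
    """Returns a list of machines, each a list of task demands."""
    machines = []                       # each entry: [free capacity, tasks]
    for d in sorted(demands, reverse=True):
        for m in machines:
            if m[0] >= d:               # task fits an already-active machine
                m[1].append(d)
                m[0] -= d
                break
        else:                           # otherwise activate a new machine
            machines.append([capacity - d, [d]])
    return [tasks for _, tasks in machines]
```

Four tasks of demand 5, 5, 4 and 6 pack into two machines of capacity 10 instead of four.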
A survey on various resource allocation policies in cloud computing environment (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Efficient Resource Sharing In Cloud Using Neural Network (IJERA Editor)
In cloud computing, collaborative cloud computing (CCC) is an emerging technology in which globally dispersed cloud resources belonging to different organizations are used collectively, in a cooperative manner, to provide services. In previous research, Harmony enables a node to locate its desired resources and also find the reputation of the located resources, so that a client can choose resource providers not only by resource availability but also by the provider's reputation for providing the resource. In the proposed system, to improve resource utilization, the optimal time period for allocating resources is used in neural network training; for load-factor calculation, a dynamic priority scheduling technique is used to assign priorities to cloud users according to their load. The dynamic priority scheduling algorithm strikes the right balance between performance and power efficiency.
ADVANCES IN HIGHER EDUCATIONAL RESOURCE SHARING AND CLOUD SERVICES FOR KSA (IJCSES Journal)
Cloud represents an important change in the way information technology is used. Cloud makes it possible to access work anywhere, anytime, and to share it with anyone [1]. It is changing the way people communicate, work and learn [2]. In this changing environment, it is important to think about the opportunities and risks of using the cloud in the education field, and the lessons we can learn from previous uses of this technology in education. In order to gain the benefits of the cloud for use in the educational system in KSA, a comprehensive study of the scientific literature is presented in this paper. This paper also presents significant information such as findings, case studies, related frameworks and supporting tools associated with the migration of organizational resources to the cloud.
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ... (Editor IJCATR)
The aim of cloud computing is to share a large number of resources and pieces of equipment to compute and store knowledge and information for great scientific sources. Therefore, the scheduling algorithm is regarded as one of the most important challenges and problems in the cloud. To solve the task scheduling problem in this study, the ant colony optimization (ACO) algorithm was adapted from social theories with a fair and accurate resource allocation approach based on machine performance and capacity. This study was intended to decrease the runtime and executive costs. It was also meant to optimize the use of machines and reduce their idle time. Finally, the proposed method was compared with Berger and greedy algorithms. The simulation results indicate that the proposed algorithm reduced the makespan and executive cost when tasks were added. It also increased fairness and load balancing. Moreover, it made the optimal use of machines possible and increased user satisfaction. According to evaluations, the proposed algorithm improved the makespan by 80%.
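A toy version of the ACO idea for task scheduling: ants assign tasks to machines with probability proportional to pheromone times a heuristic (the inverse of execution time), and pheromone is evaporated and then reinforced along the best tour found. The parameters and structure are illustrative, not the paper's exact algorithm:

```python
# Toy ant colony optimisation for task-to-machine scheduling (makespan).
import random

def aco_schedule(exec_time, n_ants=20, n_iter=30, rho=0.5, seed=1):
    """exec_time[t][m] = time of task t on machine m. Minimises makespan."""
    rng = random.Random(seed)
    T, M = len(exec_time), len(exec_time[0])
    pher = [[1.0] * M for _ in range(T)]
    best, best_span = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            # each ant picks a machine per task, biased by pheromone/time
            tour = [rng.choices(range(M),
                                weights=[pher[t][m] / exec_time[t][m]
                                         for m in range(M)])[0]
                    for t in range(T)]
            loads = [0.0] * M
            for t, m in enumerate(tour):
                loads[m] += exec_time[t][m]
            span = max(loads)
            if span < best_span:
                best, best_span = tour, span
        for t in range(T):               # evaporate, then reinforce best tour
            for m in range(M):
                pher[t][m] *= (1 - rho)
            pher[t][best[t]] += 1.0
    return best, best_span
```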
SERVICE ORIENTED QUALITY REQUIREMENT FRAMEWORK FOR CLOUD COMPUTING (ijcsit)
This research paper introduces a framework to identify the quality requirements of cloud computing services. It considers two dominant sub-layers, the functional layer and the runtime layer, against cloud characteristics. SERVQUAL model attributes and the opinions of industry experts were used to derive the quality constructs in the cloud computing environment. The framework gives a proper identification of users' cloud computing service quality expectations. The validity of the framework was evaluated using a questionnaire-based survey, and the partial least squares-structural equation modelling (PLS-SEM) technique was used to evaluate the outcome. The research findings show that the significance of the functional layer is higher than that of the runtime layer, and the prioritized quality factors of the two layers are Service time, Information and data security, Recoverability, Service Transparency, and Accessibility.
An Overview of Workflow Management on Mobile Agent Technology (IJERA Editor)
Mobile-agent workflow management is quite appropriate for handling control flows in open distributed systems; it is an emerging technology that can bring process-oriented tasks from diverse frameworks to run as a single unit. This workflow technology offers organizations the opportunity to reshape business processes beyond the boundaries of their own organizations, so that instead of static models, the modern era incurs dynamic workflows that can respond to changes during execution, provide the necessary security measures, offer a great degree of adaptivity, troubleshoot running processes and recover lost states through fault tolerance. The prototype we are planning to design is meant to ensure reliability, security, robustness and scalability without being forced to trade off performance. This paper is concerned with the design, implementation and performance evaluation of improved methods for the proposed prototype models, based on current research in this domain.
Effective and Efficient Job Scheduling in Grid Computing (Aditya Kokadwar)
The integration of remote and diverse resources and the increasing computational needs of Grand Challenge problems, combined with the rapid growth of the internet and communication technologies, have led to the development of global computational grids. Grid computing is a prevailing technology which unites underutilized resources in order to support sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential to achieve the promised capacity of grids. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate resources with minimum cost, merging this with a conventional job-grouping strategy to provide better and more efficient job scheduling that benefits both the user and the resource broker. The proposed scheduling approach employs a dynamic cost-based job scheduling algorithm for efficiently mapping jobs to available resources in the grid. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
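The grouping-then-allocation strategy can be sketched in two steps: merge small jobs into coarse groups (which improves the communication-to-computation ratio), then send each group to the cheapest resource that can run it. Granularity units, resource names and costs below are illustrative:

```python
# Job grouping + greedy min-cost resource assignment (a sketch).
def group_jobs(job_lengths, granularity):
    """Merge jobs until each group reaches `granularity` length units."""
    groups, current = [], 0
    for j in job_lengths:
        current += j
        if current >= granularity:
            groups.append(current)
            current = 0
    if current:
        groups.append(current)
    return groups

def schedule_groups(groups, resources):
    """resources: {name: (capacity, cost_per_unit)}. Each group goes to
    the cheapest resource whose capacity fits it."""
    plan = {}
    for i, g in enumerate(groups):
        fitting = {r: cost for r, (cap, cost) in resources.items() if cap >= g}
        plan[i] = min(fitting, key=fitting.get)
    return plan
```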
A LIGHT-WEIGHT DISTRIBUTED SYSTEM FOR THE PROCESSING OF REPLICATED COUNTER-LI... (ijdpsjournal)
In order to increase availability in a distributed system, some or all of the data items are replicated and stored at separate sites. This is an issue of key concern, especially given the proliferation of wireless technologies and mobile users. However, the concurrent processing of transactions at separate sites can generate inconsistencies in the stored information. We have built a distributed service that manages updates to widely deployed counter-like replicas. There are many heavy-weight distributed systems targeting large information-critical applications; our system is intentionally relatively light-weight and useful for somewhat less information-critical applications. The service is built on our distributed concurrency control scheme, which combines optimism and pessimism in the processing of transactions. The service allows a transaction to be processed immediately (optimistically) at any individual replica as long as the transaction satisfies a cost bound. All transactions are also processed in a concurrent pessimistic manner to ensure mutual consistency.
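The cost-bound idea for counter-like replicas can be illustrated as follows: an update is applied immediately at one replica only if it stays within that replica's safe share of the counter, otherwise it must wait for the coordinated (pessimistic) path. The class and quota scheme are illustrative assumptions, not the paper's protocol:

```python
# Optimistic local processing under a per-replica cost bound.
class CounterReplica:
    def __init__(self, local_quota):
        self.local_quota = local_quota   # this replica's safe share
        self.spent = 0

    def try_optimistic(self, amount):
        """Apply the update locally iff it fits the cost bound."""
        if self.spent + amount <= self.local_quota:
            self.spent += amount
            return True                  # processed immediately
        return False                     # defer to the pessimistic round
```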
HIGHLY SCALABLE, PARALLEL AND DISTRIBUTED ADABOOST ALGORITHM USING LIGHT WEIG... (ijdpsjournal)
AdaBoost is an important algorithm in machine learning and is being widely used in object detection.
AdaBoost works by iteratively selecting the best amongst weak classifiers, and then combines several weak
classifiers to obtain a strong classifier. Even though AdaBoost has proven to be very effective, its learning
execution time can be quite large depending upon the application e.g., in face detection, the learning time
can be several days. Due to its increasing use in computer vision applications, the learning time needs to
be drastically reduced so that an adaptive near real time object detection system can be incorporated. In
this paper, we develop a hybrid parallel and distributed AdaBoost algorithm that exploits the multiple
cores in a CPU via light weight threads, and also uses multiple machines via a web service software
architecture to achieve high scalability. We present a novel hierarchical web services based distributed
architecture and achieve nearly linear speedup up to the number of processors available to us. In
comparison with the previously published work, which used a single level master-slave parallel and
distributed implementation [1] and only achieved a speedup of 2.66 on four nodes, we achieve a speedup of
95.1 on 31 workstations each having a quad-core processor, resulting in a learning time of only 4.8
seconds per feature.
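The expensive part of each AdaBoost round is evaluating every candidate weak classifier on the weighted sample; those evaluations are independent, so they can be fanned out in parallel. Threads below stand in for the paper's worker nodes; the data and classifiers are illustrative:

```python
# Parallel evaluation of weak classifiers, the core of one boosting round.
from concurrent.futures import ThreadPoolExecutor

def weighted_error(classifier, samples, labels, weights):
    return sum(w for x, y, w in zip(samples, labels, weights)
               if classifier(x) != y)

def best_weak_classifier(classifiers, samples, labels, weights):
    """Evaluate all candidates in parallel; keep the lowest-error one."""
    with ThreadPoolExecutor() as pool:
        errors = list(pool.map(
            lambda c: weighted_error(c, samples, labels, weights),
            classifiers))
    i = min(range(len(errors)), key=errors.__getitem__)
    return classifiers[i], errors[i]
```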
Spectrum requirement estimation for imt systems in developing countries (ijdpsjournal)
In this paper we analyze the methodology developed by the International Telecommunication Union (ITU) for estimating the spectrum requirement for International Mobile Telecommunications (IMT) systems. The ITU estimates spectrum requirements by following ITU-R Rec. M.1768. Although this methodology is adopted by ITU-R, there are discrepancies in estimating the spectrum requirement for developing countries. The ITU estimates the spectrum requirement by considering technical and market parameters that were provided by the most developed countries with high income and a high development index. Developed countries have a very rapidly expanding telecom market due to their high level of penetration, dominant user density and usage of high-volume multimedia services. In contrast, developing countries use less bandwidth-intensive services such as voice communication, low-rate data, and low and medium multimedia. Thus, while the input parameters are adequate for developed countries, they do not reflect the status of developing countries. For this reason the ITU spectrum estimation overestimates the exact spectrum requirements of IMT systems for developing countries. This paper presents an approach based on technical and market-related parameters, which is thought to be applicable for overcoming the shortcomings of the current ITU methodology in estimating the spectrum requirement for developing countries like Bangladesh.
Survey comparison estimation of various routing protocols in mobile ad hoc ne... (ijdpsjournal)
A MANET is an autonomous system of mobile nodes attached by wireless links. It represents a complex and dynamic distributed system that consists of mobile wireless nodes which can freely self-organize into an ad-hoc network topology. The devices in the network may have limited transmission range; therefore multiple hops may be needed for one node to transfer data to another node in the network. This leads to the need for an effective routing protocol. In this paper we study various classifications of routing protocols and their types for wireless mobile ad-hoc networks, such as DSDV, GSR, AODV, DSR, ZRP, FSR, CGSR, LAR, and Geocast protocols. We also compare different routing protocols based on a given set of parameters: Scalability, Latency, Bandwidth, Control overhead, and Mobility impact.
Design Of Elliptic Curve Crypto Processor with Modified Karatsuba Multiplier ... (ijdpsjournal)
ECDSA stands for "Elliptic Curve Digital Signature Algorithm"; it is used to create a digital signature of data (a file, for example) in order to allow you to verify its authenticity without compromising its security. This paper presents an architecture for finite field multiplication; the multiplier used in this processor is a hybrid Karatsuba multiplier. For the multiplicative inverse we choose the Itoh-Tsujii Algorithm (ITA). This work presents the design of a high-performance elliptic curve crypto processor (ECCP) for an elliptic curve over the finite field GF(2^233). The curve we choose is the standard curve for digital signatures. The processor is synthesized for a Xilinx FPGA.
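Karatsuba multiplication replaces four sub-multiplications with three plus some additions and shifts. The integer version below illustrates the recursion; the paper's hybrid multiplier applies the same idea to polynomials over GF(2^233), falling back to a classical multiplier below a size threshold (the "hybrid" part):

```python
# Karatsuba recursion over the integers (illustrative, not GF(2^233)).
def karatsuba(x, y, threshold=16):
    if x < threshold or y < threshold:
        return x * y                      # classical base case
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)   # split into high/low halves
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh, threshold)                      # high product
    b = karatsuba(xl, yl, threshold)                      # low product
    c = karatsuba(xh + xl, yh + yl, threshold) - a - b    # cross terms
    return (a << (2 * n)) + (c << n) + b
```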
Crypto multi tenant an environment of secure computing using cloud sql (ijdpsjournal)
Today's most active research area in computing is cloud computing, due to its ability to diminish the costs associated with virtualization, its high availability and dynamic resource pools, and the way it increases the efficiency of computing. But it still has some drawbacks, such as privacy and security. This paper focuses on the security of data in the multi-tenant model that arises from the virtualization feature of cloud computing. We use the AES 128-bit algorithm and Cloud SQL to protect sensitive data before storing it in the cloud. When an authorized customer requests the data, it is first decrypted and then provided to the customer. The multi-tenant infrastructure is supported by Google, which prefers pushing content in short iteration cycles. As customers are distributed and their demands can arise anywhere, anytime, data cannot be stored at a single site; it must be available at different sites as well. For such faster access by different users from different places, Google is the best choice. To achieve high reliability and availability, data is stored encrypted before being written to the database and updated every time after usage. The system is easy to use without requiring any software. An authenticated user can recover their encrypted and decrypted data, affording efficient and secure data storage in the cloud.
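The encrypt-before-store flow can be sketched as follows. To keep the sketch dependency-free, a keyed stream derived with SHA-256 stands in for AES-128; a real deployment would use an AES library, and the key/nonce names are illustrative:

```python
# Encrypt before storing in the cloud DB; decrypt only on authorized access.
import hashlib

def _keystream(key, nonce, n):
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, nonce, plaintext):
    ks = _keystream(key, nonce, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR stream: applying the keystream again inverts it

cloud_db = {}  # stand-in for the Cloud SQL table; stores ciphertext only

def store(key, nonce, row_id, data):
    cloud_db[row_id] = encrypt(key, nonce, data)

def fetch(key, nonce, row_id):
    return decrypt(key, nonce, cloud_db[row_id])
```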
SURVEY ON QOE\QOS CORRELATION MODELS FOR MULTIMEDIA SERVICES (ijdpsjournal)
This paper presents a brief review of some existing correlation models which attempt to map Quality of Service (QoS) to Quality of Experience (QoE) for multimedia services. The term QoS refers to deterministic network behaviour, so that data can be transported with a minimum of packet loss and delay and maximum bandwidth. QoE is a subjective measure that involves human dimensions; it ties together user perception, expectations, and experience of the application and network performance. The Holy Grail of subjective measurement is to predict it from objective measurements; in other words, to predict QoE from a given set of QoS parameters, or vice versa. Whilst there are many quality models for multimedia, most of them are only partial solutions to predicting QoE from a given QoS. This contribution analyses a number of previous attempts and optimisation techniques that can reliably compute the weighting coefficients for the QoS/QoE mapping.
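One widely cited family of QoS-to-QoE mappings is the exponential IQX form, where QoE decays exponentially as a QoS impairment (e.g. packet loss) grows: QoE = alpha * exp(-beta * impairment) + gamma. The coefficients below are illustrative, not fitted to any real service:

```python
# Exponential (IQX-style) QoS-to-QoE mapping with illustrative coefficients.
import math

def iqx_qoe(impairment, alpha=3.0, beta=4.0, gamma=1.5):
    """Map a QoS impairment level to a MOS-like 1..5 QoE score."""
    return alpha * math.exp(-beta * impairment) + gamma
```

In a survey setting, the weighting coefficients (alpha, beta, gamma) would be fitted per service from subjective test data.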
Implementing database lookup method in mobile wimax for location management a... (ijdpsjournal)
Mobile WiMAX plays a vital role in accessing delay-sensitive audio, video streaming and mobile IPTV. To minimize the handover delay, a Location Management Area (LMA) based Multicast and Broadcast Service (MBS) zone is established. The handover delay increases with the size of the MBS zone. In this paper, the Location Management Area is easily identified by using a Database Lookup Method to obtain efficient bandwidth utilization along with reduced handover delay and increased throughput. The handover delay and throughput are calculated by implementing this scenario in the OPNET tool.
BREAST CANCER DIAGNOSIS USING MACHINE LEARNING ALGORITHMS - A SURVEY (ijdpsjournal)
Breast cancer has become common nowadays. Despite this, not all general hospitals have the facilities to diagnose breast cancer through mammograms, and waiting a long time for a diagnosis may increase the possibility of the cancer spreading. Therefore, computerized breast cancer diagnosis has been developed to reduce the time taken to diagnose breast cancer and to reduce the death rate. This paper summarizes a survey on breast cancer diagnosis using various machine learning algorithms and methods, which are used to improve the accuracy of predicting cancer. This survey can also help us to know the number of papers implemented to diagnose breast cancer.
STUDY OF VARIOUS FACTORS AFFECTING PERFORMANCE OF MULTI-CORE PROCESSORS (ijdpsjournal)
Advances in integrated circuit processing allow for more microprocessor design options. As chip multiprocessor (CMP) systems become the predominant topology for leading microprocessors, critical components of the system are now integrated on a single chip. This enables sharing of computation resources that was not previously possible. In addition, the virtualization of these computation resources exposes the system to a mix of diverse and competing workloads. On-chip cache memory is a resource of primary concern, as it can be dominant in controlling overall throughput. This paper presents an analysis of various parameters affecting the performance of multi-core architectures: varying the number of cores, changing the L2 cache size, and varying the directory size from 64 to 2048 entries on 4-node, 8-node, 16-node and 64-node chip multiprocessors. This in turn presents an open area of research on multicore processors with private/shared last-level caches, as the future trend seems to be towards tiled architectures executing multiple parallel applications with optimized silicon area utilization and excellent performance.
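Why cache size is a first-order parameter can be shown with a toy experiment: vary the number of sets of a direct-mapped cache and measure the hit rate on a fixed address trace. Real multi-core studies use full-system simulators; this sketch only illustrates the effect:

```python
# Toy direct-mapped cache: hit rate as a function of cache size.
def hit_rate(trace, n_sets, block=64):
    cache = [None] * n_sets
    hits = 0
    for addr in trace:
        idx = (addr // block) % n_sets   # set index
        tag = addr // block // n_sets    # remaining address bits
        if cache[idx] == tag:
            hits += 1
        else:
            cache[idx] = tag             # miss: fill the set
    return hits / len(trace)
```

A loop over 8 blocks hits 90% of the time once all 8 fit, and 0% when only 4 sets force constant eviction.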
Target Detection System (TDS) for Enhancing Security in Ad hoc Network (ijdpsjournal)
The idea of an ad hoc network is a new paradigm that allows mobile hosts (nodes) to communicate without relying on a predefined infrastructure to keep the network connected. Most nodes are assumed to be mobile and communication is assumed to be wireless. Ad-hoc networks are collaborative in the sense that each node is assumed to relay packets for other nodes, which will in return relay its packets. Thus all nodes in an ad-hoc network form part of the network's routing infrastructure. The mobility of nodes in an ad-hoc network means that both the population and the topology of the network are highly dynamic. It is very difficult to design a once-for-all target detection system; instead, an incremental enrichment strategy may be more feasible. A safe and sound protocol should at least include mechanisms against known attack types. In addition, it should provide a way to easily add new security features in the future. Due to the significance of MANET routing protocols, we focus on the recognition of attacks targeted at MANET routing protocols.
Intrusion detection techniques for the cooperation of nodes in a MANET have been chosen as the security parameter. This includes the Watchdog and Pathrater approach. It also covers Reputation-Based Schemes, in which the reputation of every node is measured and propagated to every node in the network. Reputation is defined as a node's contribution to network operation. The CONFIDANT [23], CORE [25] and OCEAN [24] schemes are analyzed and compared here based on various parameters.
EFFICIENT SCHEDULING STRATEGY USING COMMUNICATION AWARE SCHEDULING FOR PARALL... (ijdpsjournal)
In the area of computer science, parallel job scheduling is an important field of research. Finding the most suitable processor on a high-performance or cluster computing system for user-submitted jobs plays an important role in measuring system performance. A new scheduling technique called communication-aware scheduling is devised, capable of handling serial jobs, parallel jobs, mixed jobs and dynamic jobs. This work focuses on the comparison of communication-aware scheduling with the available parallel job scheduling techniques, and the experimental results show that communication-aware scheduling performs better than the available parallel job scheduling techniques.
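The communication-aware idea can be sketched as a placement score: each candidate node is rated by compute time plus estimated communication time, and the job goes to the minimum. The cost model and node attributes are illustrative assumptions:

```python
# Communication-aware placement: compute time + communication time.
def place_job(job_ops, job_comm, nodes):
    """nodes: {name: (speed, link_latency)}. Returns the node minimising
    total cost = job_ops / speed + job_comm * link_latency."""
    def cost(n):
        speed, latency = nodes[n]
        return job_ops / speed + job_comm * latency
    return min(nodes, key=cost)
```

A communication-heavy job prefers a slower but nearer node; a compute-heavy job prefers the faster one.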
The concept of the genetic algorithm is specifically useful in load balancing for the best distribution of virtual machines across servers. In this paper, we focus on load balancing and also on the efficient use of resources to reduce energy consumption without degrading cloud performance. Cloud computing is an on-demand service in which shared resources, information, software and other devices are provided according to the client's requirements at a specific time. It is a term generally used in the context of the Internet; the whole Internet can be viewed as a cloud. Capital and operational costs can be cut using cloud computing. Cloud computing is defined as a large-scale distributed computing paradigm driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms and services is delivered on demand to external customers over the Internet. It is a recent field of computational intelligence which aims at surmounting computational complexity and provides services dynamically using very large, scalable and virtualized resources over the Internet. It is defined as a distributed system containing a collection of computing and communication resources located in distributed data centers which are shared by several end users. It has been widely adopted by industry, though there are many open issues such as Load Balancing, Virtual Machine Migration, Server Consolidation and Energy Management.
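A tiny genetic algorithm matching the idea above: a chromosome maps each VM to a server, fitness is the load imbalance (max minus min server load), and selection, crossover and mutation search for a balanced distribution. Sizes and parameters are illustrative:

```python
# Toy GA for VM-to-server placement, minimising load imbalance.
import random

def imbalance(chrom, vm_load, n_servers):
    loads = [0] * n_servers
    for vm, s in enumerate(chrom):
        loads[s] += vm_load[vm]
    return max(loads) - min(loads)

def ga_balance(vm_load, n_servers, pop=30, gens=60, seed=3):
    rng = random.Random(seed)
    popu = [[rng.randrange(n_servers) for _ in vm_load] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda c: imbalance(c, vm_load, n_servers))
        survivors = popu[: pop // 2]         # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(vm_load))
            child = a[:cut] + b[cut:]        # one-point crossover
            m = rng.randrange(len(vm_load))  # point mutation
            child[m] = rng.randrange(n_servers)
            children.append(child)
        popu = survivors + children
    return min(popu, key=lambda c: imbalance(c, vm_load, n_servers))
```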
A SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTING (ijccsa)
Cloud computing is an on-demand service resource which includes applications to data centers on a
pay-per-use basis. In order to allocate these resources properly and satisfy users’ demands, an efficient
and flexible resource allocation mechanism is needed. Due to increasing user demand, the resource
allocating process has become more challenging and difficult. One of the main focuses of research
scholars is how to develop optimal solutions for this process. In this paper, a literature review on proposed
dynamic resource allocation techniques is introduced.
A survey on various resource allocation policies in cloud computing environment eSAT Journals
Abstract: Cloud computing is bringing a revolution to the computing environment, replacing traditional software installation and licensing with complete on-demand services through the Internet. In cloud computing, multiple cloud users can request a number of cloud services simultaneously, so there must be a provision that all resources are made available to requesting users in an efficient manner to satisfy their needs. Resource allocation is based on quality of service and service level agreements. In a cloud computing environment there are several methods to allocate resources to users, but the provider should consider an efficient way to guarantee that the applications' requirements are attended to correctly and satisfy the users' needs. This paper surveys different resource allocation policies used in cloud computing environments. Keywords: Cloud computing, Resource allocation
Support for Goal Oriented Requirements Engineering in Elastic Cloud Applications zillesubhan
Businesses have already started to exploit potential uses of cloud computing as a new paradigm for promoting their services. Although the general concepts they focus on in practice are viability, survivability, adaptability, etc., on the ground there is still a lack of mechanisms to sustain viability while adapting to new requirements in cloud-based applications. This has inspired a pressing need to adopt new methodologies and abstract models that support system acquisition for self-adaptation, thus guaranteeing autonomic cloud application behavior. This paper relies on the state-of-the-art Neptune framework as a runtime adaptive software development environment, supported by an intention-oriented modeling language, for the representation and adaptation of goal-based model artifacts and their intrinsic property requirements. Such an approach will in turn allow distributed service-based applications virtually over the cloud to sustain self-adaptive behavior with respect to their functional and non-functional characteristics.
On the Optimal Allocation of Virtual Resources in Cloud Compu.docx hopeaustin33688
On the Optimal Allocation of Virtual
Resources in Cloud Computing Networks
Chrysa Papagianni, Aris Leivadeas, Symeon Papavassiliou,
Vasilis Maglaris, Cristina Cervelló-Pastor, and Álvaro Monje
Abstract—Cloud computing builds upon advances on virtualization and distributed computing to support cost-efficient usage of
computing resources, emphasizing on resource scalability and on demand services. Moving away from traditional data-center oriented
models, distributed clouds extend over a loosely coupled federated substrate, offering enhanced communication and computational
services to target end-users with quality of service (QoS) requirements, as dictated by the future Internet vision. Toward facilitating the
efficient realization of such networked computing environments, computing and networking resources need to be jointly treated and
optimized. This requires delivery of user-driven sets of virtual resources, dynamically allocated to actual substrate resources within
networked clouds, creating the need to revisit resource mapping algorithms and tailor them to a composite virtual resource mapping
problem. In this paper, toward providing a unified resource allocation framework for networked clouds, we first formulate the optimal
networked cloud mapping problem as a mixed integer programming (MIP) problem, indicating objectives related to cost efficiency of
the resource mapping procedure, while abiding by user requests for QoS-aware virtual resources. We subsequently propose a method
for the efficient mapping of resource requests onto a shared substrate interconnecting various islands of computing resources, and
adopt a heuristic methodology to address the problem. The efficiency of the proposed approach is illustrated in a simulation/emulation
environment, that allows for a flexible, structured, and comparative performance evaluation. We conclude by outlining a proof-of-
concept realization of our proposed schema, mounted over the European future Internet test-bed FEDERICA, a resource virtualization
platform augmented with network and computing facilities.
Index Terms—Federated infrastructures, resource allocation, resource mapping, virtualization, cloud computing, quality of service
1 INTRODUCTION
CLOUD computing promises reliable services delivered through next generation data centers that are built on
compute and storage virtualization technologies. According
to Buyya et al., [1] “a cloud is a type of parallel and distributed
system consisting of a collection of interconnected and virtualized
computers that are dynamically provisioned and presented as one
or more unified computing resources based on service-level
agreements established through negotiation between the service
provider and the consumers” and accessible as a composable
service via web 2.0 technologies.
Therefore, with respect to cloud computing there exist
the “as a service” definitions, which include software as a
service (SaaS), infrastructure as a service (IaaS), etc.
An Efficient Queuing Model for Resource Sharing in Cloud Computing theijes
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIME ijccsa
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, exploration of unused resources, corresponding resource mapping and resource planning. Planning for the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, and it has become one of the important QoS metrics. Explored further, response time can provide solutions for the distribution and load balancing of resources with better efficiency; this is one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm); the main idea is to use ARIMA models to predict the coming response time, thus giving a better way of effectively resolving resource allocation with a threshold value. The experimental results are promising and valuable for load balancing with predicted response time, showing that prediction is a fruitful direction for load balancing.
An Algorithm to synchronize the local database with cloud Database AM Publications
Since the cloud computing [1] platform is widely accepted by industry, a variety of applications are designed to target a cloud platform. Database as a Service (DaaS) is one of the powerful platforms of cloud computing. There are many research issues in the DaaS platform, one of which is data synchronization. Many approaches have been suggested in the literature for synchronizing a local database from within the cloud environment; unfortunately, very few works are available on synchronizing a cloud database from within the local database. The aim of this paper is to provide an algorithm to solve the problem of data synchronization from a local database to a cloud database.
GROUP BASED RESOURCE MANAGEMENT AND PRICING MODEL IN CLOUD COMPUTING ijcsit
Cloud computing utilizes large-scale computing infrastructure that has been radically changing the IT landscape, enabling remote access to computing resources with low service cost and high scalability, availability and accessibility. Serving tasks from multiple users, where the tasks have different characteristics and varying requirements for computing power, may cause under- or over-utilization of resources. Therefore, maintaining such a mega-scale datacenter requires an efficient resource management procedure to increase resource utilization. However, while maintaining efficiency in service provisioning, it is necessary to ensure the maximization of profit for the cloud providers. Most current research aims at how providers can offer efficient service provisioning to users and improve system performance; there are comparatively few specific works on resource management that also deal with the economic aspect of profit maximization for the provider. In this paper we present a model that deals with both efficient resource utilization and pricing of the resources. The joint resource management model combines user assignment, task scheduling and load balancing based on CPU power endorsement. We propose four algorithms, respectively for user assignment, task scheduling, load balancing and pricing, that work on group-based resources, offering reductions in task execution time (56.3%), activated physical machines (41.44%) and provisioning cost (23%). The cost is calculated over a time interval involving the number of customers served in this time and the amount of resources used within it.
AN OPEN JACKSON NETWORK MODEL FOR HETEROGENEOUS INFRASTRUCTURE AS A SERVICE O... IJCNCJournal
Cloud computing is an environment that provides services on user demand, such as software, platform and infrastructure. Applications deployed on cloud computing have become more varied and complex in order to adapt to increasing end-user numbers and fluctuating workloads. One popular characteristic of cloud computing is the heterogeneity of networks, hosts and virtual machines (VMs). There have been many studies on cloud computing modeling based on queuing theory, but most have focused on the homogeneous case. In this study, we propose a cloud computing model based on an open Jackson network for multi-tier application systems deployed on heterogeneous VMs of IaaS cloud computing. Important metrics such as mean waiting time, mean request quantity and system throughput are analyzed in our experiments. Besides that, the metrics in the model are used to adjust the number of VMs allocated to applications. The results of the experiments show that the open queueing network provides high efficiency.
A FRAMEWORK FOR SOFTWARE-AS-A-SERVICE SELECTION AND PROVISIONING IJCNCJournal
As cloud computing is increasingly transforming the information technology landscape, organizations and
businesses are exhibiting strong interest in Software-as-a-Service (SaaS) offerings that can help them
increase business agility and reduce their operational costs. They increasingly demand services that can
meet their functional and non-functional requirements. Given the plethora and the variety of SaaS
offerings, we propose, in this paper, a framework for SaaS provisioning, which relies on brokered Service
Level agreements (SLAs), between service consumers and SaaS providers. The Cloud Service Broker (CSB)
helps service consumers find the right SaaS providers that can fulfil their functional and non-functional
requirements. The proposed selection algorithm ranks potential SaaS providers by matching their offerings
against the requirements of the service consumer using an aggregate utility function. Furthermore, the CSB
is in charge of conducting SLA negotiation with selected SaaS providers, on behalf of service consumers,
and performing SLA compliance monitoring
International Journal of Distributed and Parallel Systems (IJDPS) Vol.5, No.5, September 2014
MANAGEMENT OF CONTEXT-AWARE SOFTWARE
RESOURCES DEPLOYED IN A CLOUD ENVIRONMENT
FOR IMPROVING QUALITY OF MOBILE CLOUD
SERVICES
Sohame Mohammadi1, Kamran Zamanifar2 and Sayed Mehran Sharafi3
Department of Computer Engineering, Najaf Abad Branch, Islamic Azad University,
Isfahan, Iran
ABSTRACT
In cloud computing environments, context information is continuously created by context providers and
consumed by applications on mobile devices. An important characteristic of cloud-based context-aware
services is meeting the service level agreements (SLAs) to deliver a certain quality of service (QoS), such as
guarantees on response time or price. The response time of a request to context-aware software is affected
by loading extensive context data from multiple resources onto the chosen server; such software therefore
slows down during execution. Hence, proper scheduling of such services is indispensable, because the
customers face time constraints. In this research, a new scheduling algorithm for context-aware services is
proposed, based on classifying similar context consumers and dynamically scoring the requests, to improve
the performance of servers hosting highly requested context-aware software while reducing the cloud
provider's costs. The approach is evaluated via simulation and comparison with the gi-FIFO scheduling
algorithm. Experimental results demonstrate the efficiency of the proposed approach.
KEYWORDS
Cloud Computing, Context-Aware Computing, Context Management, SLA.
1. INTRODUCTION
Cloud computing is a distributed and parallel computing system that builds on the convergence
and advancement of several technologies, especially in utility and grid computing, autonomic
computing, hardware virtualization, service-oriented architecture, web services and the existing
relationship among them [1]. According to the NIST definition [2], cloud computing
consists of five essential characteristics, namely on-demand self-service, broad network access,
resource pooling, rapid elasticity, and measured service. All resources in this
technology are provided as services, and the cloud computing architecture is classified
into a three-layer structure comprising Infrastructure as a Service (IaaS), Platform as a
Service (PaaS) and Software as a Service (SaaS) [1, 2].
Due to the improvement of mobile applications and the emergence of the concept of cloud computing
in the last decade, cloud computing was introduced as a potential technology for mobile services,
including context-aware services. Although most computing devices (e.g. mobile phones, PDAs,
laptops, tablets, etc.) in the cloud computing environment are small and can accompany
their users everywhere, they face resource constraints (e.g. battery life, CPU speed, storage,
DOI:10.5121/ijdps.2014.5501
and bandwidth). These resource constraints limit the quality of mobile services. Hence,
data processing and storage are carried out outside the mobile device, in the cloud, while
the mobile device operates as a display platform [3].
A context-aware service continually collects the context of the environment and adapts its
operation to the context data. When a customer sends a request to context-aware software
in the distributed cloud computing environment, the response to the request is affected by
loading extensive context data from multiple resources onto the chosen server [4, 5].
Therefore, the performance of such software decreases during execution.
Moreover, in the cloud computing environment, the users of cloud services are grouped into
various classes according to the costs they have paid; the services are managed properly only
if higher-quality service is dedicated to the users who have paid more. Meeting service level
agreements while decreasing resource management costs is an important challenge for cloud
providers. If cloud providers either fail to meet the objectives set in the agreements or provide
lower-quality services to customers than agreed upon, they are penalized by having to refund
the costs or credit the customers' accounts [1, 5], which reduces the total profit earned by the
cloud providers.
In this study, a new scheduling algorithm is suggested that relies on classifying similar context
queries and dynamically scoring requests to enhance the efficiency of servers hosting highly
requested context-aware software. The proposed approach is based on the idea that
communication costs during the execution of an application are regarded as overhead, especially
when communications are established to load context information from distant resources. These
communication costs can be compensated for by using multithreaded applications and running
suitable requests together according to the context data type, which improves the efficiency of
the provided services. Moreover, many users often look for similar context information; in other
words, many requests in the queue involve accessing and processing the same or similar context
data. Therefore, context-conscious scheduling considerably reduces the mean response time and
the costs of cloud providers.
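The idea of grouping queued requests by the context data they target and dynamically scoring the groups can be sketched as follows. The request fields and the scoring formula here are illustrative assumptions, not the paper's exact scheme: groups are favored when they are large (one context load serves many requests), long-waiting, or from higher-paying user classes.

```python
from collections import defaultdict

def schedule_by_context(queue):
    """Group queued requests by the context data they target, score each
    group dynamically, and return the groups in dispatch order."""
    groups = defaultdict(list)
    for req in queue:
        groups[req["context_type"]].append(req)

    def score(group):
        # Larger groups amortize one context load over many requests;
        # waiting time and a paid user class raise urgency.
        size = len(group)
        max_wait = max(r["wait_time"] for r in group)
        weight = max(r["user_class_weight"] for r in group)
        return size * weight + max_wait

    return sorted(groups.values(), key=score, reverse=True)

queue = [
    {"id": 1, "context_type": "location", "wait_time": 4, "user_class_weight": 1},
    {"id": 2, "context_type": "weather",  "wait_time": 9, "user_class_weight": 2},
    {"id": 3, "context_type": "location", "wait_time": 2, "user_class_weight": 3},
]
batches = schedule_by_context(queue)
# Each batch shares a single context load and can be served by one worker thread.
```

Each returned batch can then be handed to one thread, so requests for similar context data are processed in parallel after a single load of the shared context.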
The rest of the paper is organized as follows. Related work is discussed in the next section.
Sections 3 and 4 are dedicated to background concepts and problem formulation. The
proposed algorithm is presented in Section 5. Section 6 describes the experimental results and
the overall evaluation. Finally, concluding remarks and future directions of this research are
given in Section 7.
2. RELATED WORK
In recent years, many job scheduling methods have been proposed in the grid and other
distributed environments. In [7], a comprehensive review of the scheduling algorithms in the grid
environment and their comparisons are presented. In contrast, due to the unique features of the
cloud computing environment, its scheduling problems are different. On the other hand, a new
generation of applications known as context-aware applications has emerged and cloud
infrastructure has been introduced to be a suitable technology for context-aware applications.
Context-aware job scheduling, management and obtaining the required context data are the main
challenges for context-aware services. A notable number of context provisioning and
management approaches have been proposed, and surveys have been published to characterize
the features of existing systems, for instance in [8-11]. In the following, some of the previous
studies related to the present work are briefly reviewed.
Boloor et al. have investigated the scheduling and dynamic allocation of
requests to manage context-aware software in the distributed cloud environment [4-6]. In this
study, the distributed cloud environment is modeled as several data centers. An attempt is made
to reduce the overall penalties charged to the cloud by proposing Data-aware Session-grained
Allocation with gi-FIFO Scheduling (DSAgS), a novel decentralized request management scheme
deployed in each of the geographically distributed data centers, together with context-cache
replacement policies that replace prior contexts when new context data are loaded into a full
cache, while considering the service level agreements so as to save time [4-6].
S. L. Kiani et al. in [12] have proposed a new cache management mechanism in the
cloud environment for managing context-aware software. In this mechanism, according to
the different types of context information, their validity periods and access patterns, the cache is
divided into two sections in a scalable way, and the efficiency of the whole system is improved by
employing appropriate scheduling policies for each section. Although the proposed idea can be
considered an effective solution for improving the system, the centralized design and the use of a
cache in the context broker, which is the main coordinating component of this system, lead to a
bottleneck. Therefore, the architecture lacks scalability and location transparency.
Assuncao et al. [13] resorted to cloud computing infrastructure to overcome resource constraints
in the mobile environment. Combining context awareness with adaptive job scheduling, they
proposed a context-aware scheduler whose main objective is to utilize resources in cloud
computing environments so as to improve the quality of service. Zhu et al. [14] proposed a
framework for flowable services that organizes services in the cloud computing environment so
that they can adapt to contextual changes in the environment and use cloud information and
resources during execution. The model has been implemented on the Java platform. Its principal
features are the use of context information to adapt services to environmental changes, resource
sharing, improved utilization of cloud resources, and portability, flexibility and interoperability
of services.
Badidi & Esmahi [15] attempted to optimize scalability, interoperability and quality of service.
To this end, they proposed a multi-attribute decision algorithm and a framework for provisioning
context information, based on implementing context-aware services on the cloud side, using a
context broker as a mediator between context consumers and context services, and employing a
publish/subscribe model. In the publish/subscribe model, context providers expose their data
items to be read by context-aware applications. An application that wants to subscribe to a
context sends its subscription request to the broker. When the context information changes, the
subscriptions have to be matched against the changed data items, so the broker sends a
notification to the applications subscribed to that context, which can then retrieve the published
context values [16]. Although the idea proposed in this study is attractive, no discussion is
provided of how brokers could actually be used as mediators, so it remains merely an idea.
3. BACKGROUND
Some of the concepts related to the present study are defined in this section.
3.1. Cloud-based context services
Dey and Abowd [17] define context as “any information that can be used to characterize the
situation of an entity. An entity is a person, place, or object that is considered relevant to the
interaction between a user and an application, including the user and application themselves”.
Context falls into two categories [17]: primary context, namely location, time, activity and
identity, which is considered the most important context information; and secondary context,
namely any other information that can be derived from the primary contexts, such as weather,
e-mail, etc. Different types of contextual information have different temporal validity durations,
remaining valid only for a particular period.
Context-aware services continuously adapt their state and behavior to context changes. Context
services are located on the cloud side, and context consumers (CxCs) look up and invoke them
[18]. In cloud environments, CxCs are entities (e.g., context-aware services) that take context
data as input to adapt their behavior to the user's current situation. A context provider (CxP) is an
entity that provides context information: it gathers raw data from context sources (e.g., sensors,
networks, mobile phones, web services), converts it into meaningful information through
aggregation, modeling and reasoning, and uploads it to the context broker (CxB). The CxB acts
as an intermediary that controls the context flow between CxPs and CxCs; its main functions are
query resolution, event management, registration of available CxPs, and routing and lookup
services [19, 20].
Context information is utilized in cloud environments in two ways [18]: first, to guide and
improve the service provided to the user; second, to detect faults, keep the system stable, and
adapt operations to changing environmental conditions.
3.2. Service level agreement
Cloud computing is a parallel and distributed computing paradigm that enables the deployment
and execution of applications and services on remote data centers over the Internet, with a
pay-per-use business model based on SLAs. An SLA is established through negotiation between
the cloud provider and the cloud consumer to ensure service quality; the negotiation can also be
conducted through a third party called a broker. An SLA may involve a variety of metrics for
measuring quality-of-service requirements, such as availability, execution time, price, reliability,
CPU usage and the storage used by the consumer [1, 21].
One of the most widely used types of SLA is percentile-based: the service provider tries to
allocate resources to the customers in accordance with the agreed percentile during certain
periods of time and, whenever it fails to meet the agreed level, has to pay a fine [1].
A probabilistic SLA has the following parameters [6, 22]. Response time (R): the time between a
request's arrival at the cloud and its departure. Response-time threshold (RT): the deadline by
which a request must leave the cloud. Actual conforming percentile: the percentile of requests of
a particular user class that have met the required response time; this value is controlled by the
request-management policy chosen by the cloud provider. Desired conforming percentile: for
each class of users, this value is determined through negotiation between the cloud provider and
the consumer. Penalty (P): a clause applied to the cloud provider if the delivered quality of
service falls below the agreed value.
Cloud service providers' penalties can be calculated with various penalty functions, including
linear, exponential and stepwise functions. This study focuses on the stepwise penalty function
and treats request deadlines as the measure of consumer satisfaction.
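A stepwise penalty can be sketched as follows; the step width and per-step fine below are illustrative assumptions, not values from this paper:

```java
// Sketch of a stepwise penalty function: no fine while the achieved conforming
// percentile meets the agreed one; otherwise a fixed fine per step of shortfall.
// Step width and fine amount are illustrative assumptions.
class StepwisePenalty {
    static double penalty(double achieved, double agreed,
                          double stepSize, double finePerStep) {
        if (achieved >= agreed) {
            return 0.0;                             // SLA met: no fine
        }
        double shortfall = agreed - achieved;
        long steps = (long) Math.ceil(shortfall / stepSize);
        return steps * finePerStep;                 // fine jumps in discrete steps
    }
}
```

Unlike a linear function, the fine stays flat within each step and jumps at step boundaries, which matches the "agreed percentile or pay a fixed fine" reading of the percentile-based SLA above.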
4. STATEMENT OF THE PROBLEM
In recent years, with the increased use of mobile applications (e.g., context-aware applications)
and the emergence of the cloud computing concept, cloud computing infrastructure has been
introduced as a potential technology for mobile-based environments to cope with mobile users'
resource constraints [3, 13]. A distributed cloud environment contains numerous servers, each
hosting one or more context-aware applications that provide services to multiple classes of CxCs.
When a request for a context-aware service arrives at the data center through the Internet, the
scheduling agent submits it to a suitable server. The job is then scheduled by the internal
scheduler at the end server, executed on a resource, and the answer is sent back to the scheduling
agent. Context-aware services utilize context to adapt themselves to their changing environment.
The response time of a request for a context-aware application comprises the context-provisioning
time, the context-provider lookup time, and the time required to load the context information onto
the chosen end server [4, 5, 12]. Consequently, context-aware software slows down during
execution in the cloud computing environment. Moreover, users of cloud services are grouped
into various classes according to the fees they have paid, and these services are managed properly
only if higher-quality service is dedicated to those who have paid more [1, 5].
To address this problem, Boloor et al. studied the scheduling and dynamic allocation of requests
for managing context-aware software in a distributed cloud environment [4-6]. They propose
Data-aware Session-grained Allocation with gi-FIFO Scheduling (DSAgS) in a distributed cloud
modeled as several data centers. The scheme is based on the gi-FIFO scheduling algorithm, which
was mathematically proven in [23] to be the most appropriate algorithm under probabilistic SLAs
for a single server serving multi-class jobs. The policy used to schedule the requests queued at
each cloud server hosting the context-aware applications can be described as follows: first,
choose the request class with the highest penalty; then, among all queued requests of that class,
choose the one with the maximum waiting time whose resulting response time is still less than or
equal to RT. If no such request exists, choose the request with the longest waiting time, even
though its response time will exceed RT [4-6].
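As a concrete reading of this policy, a minimal sketch follows; the request fields and the response-time estimate (waiting time plus service time) are our assumptions, not the authors' implementation:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Minimal sketch of the gi-FIFO selection rule described above.
class GiFifo {
    record Request(int classId, double classPenalty,
                   double waitingTime, double serviceTime) {}

    // Highest-penalty class first; within it, the longest-waiting request whose
    // response time (approximated as waiting + service time) still meets rt;
    // if none can meet rt, fall back to the longest-waiting request of the class.
    static Request next(List<Request> queued, double rt) {
        double maxPenalty = queued.stream()
                .mapToDouble(Request::classPenalty).max().orElseThrow();
        List<Request> topClass = queued.stream()
                .filter(r -> r.classPenalty() == maxPenalty).toList();
        Optional<Request> meetsRt = topClass.stream()
                .filter(r -> r.waitingTime() + r.serviceTime() <= rt)
                .max(Comparator.comparingDouble(Request::waitingTime));
        return meetsRt.orElseGet(() -> topClass.stream()
                .max(Comparator.comparingDouble(Request::waitingTime)).orElseThrow());
    }
}
```

Note that the class filter runs before the deadline filter: a low-penalty request that could still meet RT is never preferred over a high-penalty request that cannot.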
Although DSAgS can be considered a proper solution for managing the requests of servers
hosting context-aware software in the cloud environment, cloud servers can respond to only a
limited number of requests at any given moment while having to process high volumes of
deadline-based requests. As a result, the number of customers missing their deadlines increases,
the efficiency of such highly requested servers decreases drastically, and consequently the cloud
provider's penalties grow. The present research therefore aims to propose a suitable scheduling
algorithm for the requests of context-aware software in the cloud computing environment with
regard to these challenges.
5. THE PROPOSED CONTEXT-AWARE ALGORITHM
The scheduling algorithm's decision about which queued request at the end server to execute next
is affected by class penalties, request deadlines, the storage type of the context data, and the
differing patterns of context-information updates in dynamic, evolving environments. The
context-aware scheduling algorithm thus faces the multidimensional problem of estimating
uncertain scores. Hence, a new scheduling approach is proposed for managing context-aware
software requests.
Figure 1. Overall Architecture of the cloud based context provisioning system
The internal architecture of the context provisioning system, containing several context providers
on a cloud infrastructure, is depicted in Figure 1. In a cloud computing environment, context
consumers, i.e., the customers' computing devices (mobile phones, PDAs, laptops and other
computational devices), send their requests for a particular context type through the Internet
[5, 6]. Users' requests are allocated to the cloud end servers based on context-aware, dynamic
scoring [5, 6]. A cloud server may host one or more context providers, each of which serves a
number of user requests, and numerous requests call for similar context data types from such
servers. A server hosting context-aware software therefore has to respond to many requests at any
moment. Under these circumstances, given that the server can respond to only a limited number
of requests at a time and that providing the required context data takes time, the efficiency of the
server is considerably reduced.
The problem is solved by classifying and dynamically scoring requests according to their user
classes and the required context scope. A scope is a set of closely related context parameters and
the unit of context exchange that is always requested, updated, provided and stored at the same
time [20, 24]. In this approach, each group of requests requiring similar context types is treated as
a cluster within the related scope, and the penalty of each scope cluster is determined by the
number of requests for the needed context data and by the requests' user classes.
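A minimal sketch of this clustering and scoring follows; summing class penalties per scope is our simplifying assumption about how the two factors, request count and user class, combine:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: group queued requests into clusters by required scope, and score each
// cluster by summing its requests' class penalties, so a scope wanted by many
// high-class (high-penalty) users scores highest.
class ScopeClusters {
    record Request(String scopeId, double classPenalty) {}

    static Map<String, Double> clusterScores(List<Request> queued) {
        return queued.stream().collect(Collectors.groupingBy(
                Request::scopeId,
                Collectors.summingDouble(Request::classPenalty)));
    }
}
```

A sum captures both factors at once: each extra request raises the cluster's score, and a request from a higher-paying class raises it by more.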
Figure 2. The pseudo code of the proposed algorithm
To restrict the multidimensional space of request scoring, the gi-FIFO scheduling algorithm is
run from the start of the proposed algorithm to estimate class penalties and request response
times. However, since loading context data takes time, if the context information required by a
request is not valid on the server when the next request is chosen, the required context data must
first be loaded onto the server. To compensate for this communication cost with computation
during the load, the scope is chosen according to its update pattern. Two main strategies underlie
the proposed scheduling policy: select the soonest expiring first (SE) for scopes with short update
patterns, and select the highest penalty (HP) score among the scopes currently valid on the server
for scopes with long update patterns. If the conditions on the context-data loading time are met,
the requests of the chosen scope are then executed under gi-FIFO scheduling. Figure 2 shows the
pseudo-code of the proposed algorithm.
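The two scope-selection strategies can be sketched as follows; the scope fields and the validity flag are illustrative assumptions:

```java
import java.util.Comparator;
import java.util.List;

// Sketch of the two scope-selection strategies: SE for short-update-pattern
// scopes, HP for long-update-pattern scopes that are still valid on the server.
class ScopeSelection {
    record Scope(String id, double expiryTime, double penaltyScore,
                 boolean shortUpdatePattern, boolean validOnServer) {}

    // SE: among short-update-pattern scopes, pick the one expiring soonest.
    static Scope soonestExpiringFirst(List<Scope> scopes) {
        return scopes.stream()
                .filter(Scope::shortUpdatePattern)
                .min(Comparator.comparingDouble(Scope::expiryTime))
                .orElse(null);
    }

    // HP: among long-update-pattern scopes valid on the server,
    // pick the one with the highest penalty score.
    static Scope highestPenalty(List<Scope> scopes) {
        return scopes.stream()
                .filter(s -> !s.shortUpdatePattern() && s.validOnServer())
                .max(Comparator.comparingDouble(Scope::penaltyScore))
                .orElse(null);
    }
}
```

SE works off expiry times because short-update scopes go stale quickly, while HP can afford to chase penalty since long-update scopes stay valid long enough to amortize the load.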
6. IMPLEMENTATION AND EVALUATION
The proposed approach was simulated and evaluated in the Java programming language. The
analyzed parameters are the conformance levels, the average response time, and the cloud
provider's penalties for executing requests. The simulation parameters are taken from [5, 12] and
are as follows: 1) there are 5 user classes; 2) 100 distinct session IDs are used to assign the
customer's class; 3) customers enter the system according to a Poisson distribution with constant
rate λ, the number of customers per time unit; 4) the service processes are fixed; 5) twelve
different scopes are used, each characterized by a scope ID, the mean request processing time at
the context provider, the validity period of the produced context data, and the type of
scope-update pattern, with values taken from Table 1; 6) the context-data loading time is set to
three times the average service time per request.
Table 1. Simulation Parameters [12]
CxP:ScopeID Processing time[ms] Validity[s] Category
CxP:1 70 60 Short
CxP:2 70 60 Short
CxP:3 80 80 Short
CxP:4 80 80 Short
CxP:5 90 180 Short
CxP:6 90 240 Short
CxP:7 70 360 Long
CxP:8 70 400 Long
CxP:9 80 600 Long
CxP:10 80 900 Long
CxP:11 90 1200 Long
CxP:12 90 1200 Long
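The Poisson arrival process in item 3 can be sketched by sampling exponential inter-arrival gaps with mean 1/λ (the standard inverse-transform construction, not the authors' code):

```java
import java.util.Random;

// Sketch of Poisson arrivals with rate lambda (customers per time unit):
// inter-arrival gaps are exponentially distributed with mean 1/lambda.
class PoissonArrivals {
    static double[] arrivalTimes(int n, double lambda, long seed) {
        Random rng = new Random(seed);
        double[] times = new double[n];
        double t = 0.0;
        for (int i = 0; i < n; i++) {
            // Inverse-transform sampling of an Exp(lambda) gap.
            t += -Math.log(1.0 - rng.nextDouble()) / lambda;
            times[i] = t;
        }
        return times;
    }
}
```

A fixed seed makes simulation runs reproducible, so the gi-FIFO baseline and the proposed scheduler can be compared on identical arrival streams.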
After setting the simulation values, the users' requests, as consumers of the context data, are
generated, and both the gi-FIFO scheduling algorithm and the proposed approach are run.
Figures 3 and 4 plot the queues' conformance levels against the number of user requests. Under
the gi-FIFO scheduling policy, the growth rate of the conformance level decreases as the number
of user requests increases, eventually approaching zero. The experimental results show that the
conformance levels obtained with the proposed algorithm are higher than those of the gi-FIFO
scheduling algorithm for all user classes. The proposed algorithm performs best when scheduling
the queued requests of the user class with the highest penalty and worst when scheduling those of
the class with the lowest penalty.
Figure 3. Comparison of conformance levels for the user class with the highest penalty
Figure 4. Comparison of conformance levels for the user class with the lowest penalty
Figure 5 plots the mean response time against the number of user requests. Since a server can
respond to only a finite number of requests at a moment and providing services with the required
context data takes time, under the gi-FIFO scheduling policy the percentage of customers missing
their deadlines increases, producing a significant rise in the mean response time. In the proposed
approach, by contrast, requests are classified according to the required context information, they
are scored dynamically, and the context-data loading time is overlapped with the execution of
requests; the mean response time is therefore significantly reduced, which increases the total
profit earned by the cloud provider.
Figure 5. Comparison of average response time for the proposed and gi-FIFO schedules
Distinct penalty functions are considered for the different user classes; the penalty functions for
multi-class jobs define how the penalty is calculated. Our method for calculating penalties
follows Boloor [6] and is defined as follows: if delaying the newly arrived request would cause
the non-conformance to increase beyond the level given in the SLA, the cloud provider is
assigned a penalty of P $; otherwise, if the request's non-conformance is low, nothing is added to
the total penalty. The current non-conformance is calculated by the equation below:
non-conformance = ((1 - cc_k) * X_k + 1) / (X_k + 1) (1)
where X_k is the total number of serviced requests of class k since the start of the observation
interval, as measured by server d, and cc_k is the current conformance of class k in the cloud, as
calculated by d [6].
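Equation (1) reads directly as code; for example, with a perfect conformance record (cc_k = 1) over X_k = 99 serviced requests, delaying one more request yields a non-conformance of 1/100:

```java
// Equation (1): the non-conformance that would result from delaying the newly
// arrived request of class k past its deadline.
class NonConformance {
    // cc: current conformance of class k; x: requests of class k serviced
    // since the start of the observation interval.
    static double of(double cc, long x) {
        return ((1.0 - cc) * x + 1.0) / (x + 1.0);
    }
}
```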
Figure 6 shows the penalty incurred by the cloud provider for 4500 consumer queries under
different context-data update patterns. The proposed policy is evaluated against different
management strategies for short-validity (SV) and long-validity (LV) scopes, in increments of
25% [12]. The figure shows that the gi-FIFO policy results in the maximum penalty charged. The
soonest-expiring-first strategy, applied for the duration of the context load time, yields the lowest
penalty when the scope distribution of the context-data queries is biased towards SV scopes. The
highest-penalty-value policy decreases the total penalty charged to the cloud across all context
queries, but it is better suited to LV context-data queries than to SV ones. The proposed scheduler
makes use of both policies and is therefore suitable for context queries distributed over both
short- and long-validity scopes.
Figure 6. Comparison of total penalty incurred in gi-FIFO and proposed schedules
7. CONCLUSIONS AND FUTURE WORK
In the cloud computing environment, since loading context data takes time, servers hosting
context-aware software suffer from slow service provisioning, and the cloud provider's profit
therefore decreases rapidly. This research proposes a new scheduling algorithm for context-aware
services, based on dynamic scoring of requests, to improve QoS. The algorithm classifies requests
according to the type of the required context data, scores them dynamically, and overlaps the
context-data loading time with the execution of suitable requests, taking into account the storage
type of the context data and their differing update patterns. Experimental results demonstrate the
efficiency of the proposed approach and highlight the necessity of managing the requests of
context-aware software in the cloud environment.
In the future, we will investigate modeling the probabilistic SLA globally in a distributed cloud
computing environment with numerous servers, to stimulate further development of this
infrastructure.
REFERENCES
[1] R.Buyya, J.Broberg, and A.M.Goscinski, "Cloud Computing: Principles and Paradigms," Wiley
Publishing, USA, 2011.
[2] P.Mell and T.Grance, "The NIST Definition of Cloud Computing", National Institute of Standards
and Technology, Information Technology Laboratory, Technical Report Version 15, 2011.
[3] L.Guan, X.Ke, M.Song, and J.Song, "A Survey of Research on Mobile Cloud Computing," in
Computer and Information Science (ICIS), 2011 IEEE/ACIS 10th International Conference on, pp.
387-392, 2011.
[4] K.Boloor, R.Chirkova, Y.Viniotis, and T.Salo, "Dynamic Request Allocation and Scheduling for
Context Aware Applications Subject to a Percentile Response Time SLA in a Distributed Cloud," in
Cloud Computing Technology and Science (CloudCom), 2010 IEEE Second International Conference
on, pp. 464-472, 2010.
[5] K.Boloor, R.Chirkova, T.Salo, and Y.Viniotis, "Management of SOA-Based Context-Aware
Applications Hosted in a Distributed Cloud Subject to Percentile Constraints," in Services Computing
(SCC), 2011 IEEE International Conference on, pp. 88-95, 2011.
[6] K.Boloor, "Management of soa-based, data-intensive applications deployed in a distributed cloud
subject to response time percentile service level agreements", Phd Thesis, North Carolina State
University, Raleigh, NC, USA, 2012.
[7] D.Amalarethinam, P.Muthulakshmi, "An Overview of the Scheduling Policies and Algorithms in Grid
Computing", International Journal of Research & Reviews in Computer Science, vol. 2, no. 2, pp.
280, 2011.
[8] M.Baldauf, S.Dustdar, and F. Rosenberg, "A survey on context-aware systems", International Journal
of Ad Hoc and Ubiquitous Computing, vol. 2, no. 4, pp. 263–277, 2007.
[9] H.Truong and S.Dustdar, "A survey on context-aware web service systems", International Journal of
Web Information Systems, vol. 5, no. 1, pp. 5–31, 2009.
[10] J.Hong, E.Suh, and S.Kim, "Context-aware systems: A literature review and classification", Expert
Systems with Applications, vol. 36, no. 4, pp. 8509–8522, 2009.
[11] M.Knappmeyer, S.L.Kiani, E.S.Reetz, N.Baker, and R.Tonjes, "Survey of Context Provisioning
Middleware," Communications Surveys & Tutorials, IEEE, vol. 15, pp. 1492-1519, 2013.
[12] S.L.Kiani, A.Anjum, K.Munir, R.McClatchey, and N. Antonopoulos, "Context Caches in the Clouds,"
Journal of Cloud Computing: Advances, Systems and Applications, 2012.
[13] M.D.Assuncao, M.A.S.Netto, F.Koch, and S. Bianchi, "Context-Aware Job Scheduling for Cloud
Computing Environments," in Utility and Cloud Computing (UCC), 2012 IEEE Fifth International
Conference on, pp. 255-262, 2012.
[14] Y.Zhu, R.Y.Shtykh, and Q.Jin, "A Human-Centric Framework for Context-Aware Flowable Services
in Cloud Computing Environments," Journal of Information Sciences, 2012.
[15] E.Badidi, L.Esmahi, "A Cloud-Based Approach for Context Information Provisioning", World of
Computer Science and Information Technology Journal (WCSIT), pp. 63-70, 2011.
[16] E.Badidi, "A Publish/Subscribe Model for QoS-Aware Service Provisioning and Selection", Journal
of Computer Applications, vol. 26, 2011.
[17] G.Abowd, A.Dey, P. Brown, N. Davies, M.Smith, and P. Steggles, "Towards a Better Understanding
of Context and Context-Awareness", in Handheld and Ubiquitous Computing. vol. 1707, H.-W.
Gellersen, Ed., ed: Springer Berlin Heidelberg, 1999, pp. 304-307.
[18] H.Jung, S.Dong, "A Conceptual Framework for Provisioning Context-aware Mobile Cloud Services",
Cloud Computing (CLOUD), 2010 IEEE 3rd International Conference on, 2010.
[19] H.Vahdat-Nejad, K.Zamanifar and N.Nematbakhsh, "Towards a Better Understanding of Context
Aware Middleware: Survey and State of the Art", To be published, 2013.
[20] S.L. Kiani, A.Anjum, M. Knappmeyer, N.Bessis, and N. Antonopoulos, "Federated broker system for
pervasive context provisioning", Journal of Systems and Software, vol. 86, pp. 1107-1123, 2013.
[21] R.Rajavel and T.Mala, "Achieving Service Level Agreement in Cloud Environment Using Job
Prioritization in Hierarchical Scheduling", in Proceedings of the International Conference on
Information Systems Design and Intelligent Applications 2012 (INDIA 2012), Visakhapatnam,
India, January 2012, vol. 132, S. Satapathy, P. S. Avadhani, and A. Abraham, Eds., Springer
Berlin Heidelberg, pp. 547-554, 2012.
[22] K.Boloor, R.Chirkova, T. Salo, and Y.Viniotis, "Analysis of Response Time Percentile Service Level
Agreements in SOA-Based Applications", in Global Telecommunications Conference (GLOBECOM
2011), 2011 IEEE, pp. 1-6, 2011.
[23] N.Agarwal and I.Viniotis, "Performance Space of a GI/G/1 Queueing System Under a Percentile Goal
Criterion," in Performance Modelling and Evaluation of ATM Networks, D. Kouvatsos, Ed., ed:
Springer US, pp. 474-484, 1995.
[24] M.Knappmeyer, S.L.Kiani, C.Fra, B.Moltchanov, and N.Baker, "ContextML: A light-weight context
representation and context management schema", in Wireless Pervasive Computing (ISWPC), 2010
5th IEEE International Symposium on, 2010, pp. 367-372.