This document summarizes a research paper that proposes an algorithm to reduce data distribution and processing time in cloud computing for big data deployment. The paper discusses different data distribution techniques for virtual machines (VMs) in cloud computing, such as centralized, semi-centralized, hierarchical, and peer-to-peer approaches. It also reviews related work on MapReduce frameworks and load balancing algorithms. The authors implemented their proposed peer-to-peer distribution technique and Round Robin and Throttled load balancing algorithms in CloudSim. Experimental results showed the Throttled algorithm achieved significantly lower average response times than Round Robin.
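As a rough illustration of why Throttled can achieve lower response times than Round Robin, the two allocation policies can be sketched as follows. This is a minimal Python sketch, not the CloudSim implementation; the class and method names are invented for illustration:

```python
from collections import deque

class RoundRobinBalancer:
    """Cycles through VMs in order, regardless of their current load."""
    def __init__(self, num_vms):
        self.num_vms = num_vms
        self.next_vm = 0

    def allocate(self):
        vm = self.next_vm
        self.next_vm = (self.next_vm + 1) % self.num_vms
        return vm

class ThrottledBalancer:
    """Assigns a request only to an idle VM; otherwise the request waits,
    so no VM is handed more work while it is still busy."""
    def __init__(self, num_vms):
        self.busy = [False] * num_vms
        self.waiting = deque()

    def allocate(self):
        for vm, busy in enumerate(self.busy):
            if not busy:
                self.busy[vm] = True
                return vm
        return None  # no idle VM: caller queues the request

    def release(self, vm):
        self.busy[vm] = False
```

Round Robin can pile requests onto an already-loaded VM, while Throttled defers them until a VM frees up, which is one intuition for the lower average response times reported.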
Public Cloud Partition Using Load Status Evaluation and Cloud Division Rules (IJSRD)
With the growth of cloud computing, load balancing has an important impact on performance, and cloud efficiency depends on a good load balancer. Different situations call for different public cloud partitioning strategies. In this paper, we partition the public cloud using two criteria: load status evaluation and cloud division rules. Load status is evaluated from the number of cloudlets arriving at a datacenter, while the cloud division rules are based on the geographical location each cloudlet comes from. Partitioning the public cloud by geographical location improves the performance of load balancing in cloud computing. We implement the proposed system with the CloudSim 3.0 simulator.
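The partition-by-region dispatch and cloudlet-count load status evaluation described above can be sketched as follows. The region names and thresholds are assumptions for illustration, not values from the paper:

```python
# Hypothetical sketch: route each cloudlet to the partition for its
# geographical region, and evaluate a partition's load status from the
# number of cloudlets that have arrived at it.
PARTITIONS = {"asia": [], "europe": [], "americas": []}  # assumed regions
IDLE_THRESHOLD, OVERLOAD_THRESHOLD = 10, 50              # assumed tuning values

def dispatch(cloudlet_id, region):
    """Cloud division rule: the cloudlet's origin region picks the partition."""
    PARTITIONS[region].append(cloudlet_id)

def load_status(region):
    """Load status evaluation: classify a partition by arrival count."""
    n = len(PARTITIONS[region])
    if n < IDLE_THRESHOLD:
        return "idle"
    return "normal" if n < OVERLOAD_THRESHOLD else "overloaded"
```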
International Journal of Engineering Research and Development (IJERD)IJERD Editor
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJERD, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, reserach and review articles, IJERD Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathemetics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer reviw journal, indexed journal, reserach and review articles, engineering journal, www.ijerd.com, research journals,
yahoo journals, bing journals, International Journal of Engineering Research and Development, google journals, hard copy of journal
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNCJournal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to resolve the problem of task allotment, and the overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms. The dynamic nature of cloud systems makes it challenging to find an optimal solution that satisfies all evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, while Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are incorporated to improve the makespan of user tasks.
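One plausible reading of combining Round Robin with Shortest Job First is to order tasks by burst time and then time-slice them with a quantum. The sketch below is an assumption about how such a hybrid could work, not the paper's exact algorithm:

```python
from collections import deque

def hybrid_schedule(bursts, quantum):
    """Order tasks shortest-first (SJF), then time-slice them Round Robin
    style so short tasks finish early while long tasks cannot starve."""
    # queue holds (task_id, remaining_burst), shortest burst first
    queue = deque(sorted(enumerate(bursts), key=lambda t: t[1]))
    time, completion = 0, {}
    while queue:
        tid, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((tid, remaining - run))  # re-queue unfinished task
        else:
            completion[tid] = time  # task tid finishes at this time
    return completion
```

For bursts `[5, 2, 8]` with quantum 3, the 2-unit task completes first at time 2, ahead of where plain Round Robin would place it.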
LOCALITY SIM: CLOUD SIMULATOR WITH DATA LOCALITY (ijccsa)
Cloud Computing (CC) is a model for enabling on-demand access to a shared pool of configurable computing resources. Testing and evaluating the performance of cloud allocation, provisioning, scheduling, and data allocation policies has received great attention, and using a cloud simulator saves time and money while providing a flexible environment for evaluating new research work. Unfortunately, current simulators (e.g., CloudSim, NetworkCloudSim, GreenCloud) treat data only in terms of size, without any consideration of data allocation policy or locality. The NetworkCloudSim simulator is among the most commonly used simulators because it includes modules supporting the functions needed for a simulated cloud environment and can be extended with new modules. In this paper, NetworkCloudSim has been extended and modified to support data locality; the modified simulator is called LocalitySim. The accuracy of the proposed LocalitySim simulator has been validated against a mathematical model, and the simulator has been used to test the performance of a three-tier data center as a case study with the data locality feature taken into account.
Many real-time systems are naturally distributed, and these distributed systems require not only high availability but also timely execution of transactions. Consequently, eventual consistency, a weaker form of consistency than strong consistency, is an attractive choice of consistency level. Unfortunately, standard eventual consistency does not include any real-time considerations. In this paper we extend eventual consistency with real-time constraints, which we call real-time eventual consistency. Following this new definition, we propose a method that satisfies it: a new algorithm using revision diagrams and fork-join data in a real-time distributed environment, and we show that the proposed method solves the problem.
Survey on Division and Replication of Data in Cloud for Optimal Performance a... (IJSRD)
Outsourcing data to third-party administrative control, as is done in cloud computing, gives rise to security concerns. Data may be compromised by attacks from other users and nodes within the cloud, so strong security measures are required to protect data in the cloud. At the same time, the security technique used must also consider optimizing the data retrieval time. In this paper, we discuss Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which jointly addresses the security and performance issues. In the DROPS methodology, a file is divided into fragments, and the fragmented data is replicated over the cloud nodes. Each node stores only a single fragment of a given data file, ensuring that even a successful attack reveals no meaningful information to the attacker. Furthermore, the nodes storing the fragments are separated by a certain distance by means of graph T-coloring, to prevent an attacker from guessing the locations of the fragments. Moreover, DROPS does not rely on traditional cryptographic techniques for data security, avoiding computationally expensive operations. It is shown that the probability of locating and compromising all of the nodes storing the fragments of a single file is extremely low. The performance of DROPS is also compared with ten other schemes; a higher level of security with only a slight performance overhead was observed.
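The core DROPS idea of fragmenting a file and keeping fragment-holding nodes apart in the node graph (the T-coloring constraint) can be sketched as follows. The greedy placement below is a simplification of the paper's scheme, and all names are illustrative:

```python
from collections import deque

def fragment(data, n):
    """Split a file's bytes into n roughly equal fragments."""
    k = -(-len(data) // n)  # ceiling division
    return [data[i:i + k] for i in range(0, len(data), k)]

def place_fragments(frags, adjacency, t):
    """Greedy T-coloring-style placement (simplified sketch): each fragment
    goes to a node whose graph distance from every already-used node
    exceeds t, so compromising one node reveals at most one fragment."""
    def dist(a, b):
        # breadth-first search distance in the node graph
        seen, q = {a}, deque([(a, 0)])
        while q:
            node, d = q.popleft()
            if node == b:
                return d
            for nb in adjacency[node]:
                if nb not in seen:
                    seen.add(nb)
                    q.append((nb, d + 1))
        return float("inf")

    placement, used = {}, []
    for i, _ in enumerate(frags):
        for node in adjacency:
            if node not in used and all(dist(node, u) > t for u in used):
                placement[i] = node
                used.append(node)
                break
        else:
            raise RuntimeError("not enough well-separated nodes")
    return placement
```

On a path graph a-b-c-d-e with t = 1, two fragments land on non-adjacent nodes such as a and c.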
A quality of service management in distributed feedback control scheduling ar... (csandit)
Nowadays, many real-time applications use data that are geographically dispersed. Distributed real-time database management systems (DRTDBMSs) have been used to manage large amounts of distributed data while meeting the stringent temporal requirements of real-time applications. Providing Quality of Service (QoS) guarantees in DRTDBMSs is a challenging task. To address this problem, different research works are often based on a distributed feedback control real-time scheduling architecture (DFCSA). Data replication is an efficient method to help a DRTDBMS meet the stringent temporal requirements of real-time applications, but existing work has designed QoS-guaranteeing algorithms for distributed real-time databases using only a full temporal data replication policy. In this paper, we apply two data replication policies in a distributed feedback control scheduling architecture to manage QoS performance for DRTDBMSs: the first proposed policy is called semi-total replication, and the second is called partial replication.
Data warehouses store integrated and consistent data in a subject-oriented data repository dedicated especially to supporting business intelligence processes. However, keeping these repositories updated usually involves complex and time-consuming processes, commonly known as Extract-Transform-Load (ETL) tasks. These data-intensive tasks normally execute in a limited time window, and their computational requirements tend to grow over time as more data is dealt with. Therefore, we believe that a grid environment could suit rather well as the backbone of the technical infrastructure, with the clear financial advantage of using already-acquired desktop computers normally present in the organization. This article proposes a different approach to distributing ETL processes in a grid environment, taking into account not only the processing performance of its nodes but also the available bandwidth, to estimate grid availability in the near future and thereby optimize workflow distribution.
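A minimal sketch of weighting grid nodes by both processing performance and bandwidth when distributing ETL tasks might look like this. The scoring weights and greedy assignment are assumptions for illustration, not the article's estimator:

```python
def node_score(cpu_rate, bandwidth, w_cpu=0.6, w_bw=0.4):
    """Combined availability estimate for a grid node.
    The weights are assumed values, not from the article."""
    return w_cpu * cpu_rate + w_bw * bandwidth

def assign_etl_tasks(tasks, nodes):
    """Greedily send each ETL task (largest first) to the node that would
    have the lowest load relative to its combined score."""
    loads = {name: 0.0 for name in nodes}
    assignment = {}
    for task, work in sorted(tasks.items(), key=lambda t: -t[1]):
        best = min(loads,
                   key=lambda n: (loads[n] + work) / node_score(*nodes[n]))
        loads[best] += work
        assignment[task] = best
    return assignment
```

A node with high CPU rate but a saturated link scores lower than its raw speed suggests, which is the point the article makes about considering bandwidth alongside processing performance.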
THRESHOLD BASED VM PLACEMENT TECHNIQUE FOR LOAD BALANCED RESOURCE PROVISIONIN... (IJCNCJournal)
Load unbalancing is a multi-variant, multi-constraint problem that degrades the performance and efficiency of computing resources. Workload balancing techniques provide solutions for the two undesirable situations, overloading and underloading. Cloud computing uses scheduling and workload balancing for a virtualized environment and resource sharing in the cloud infrastructure; these two factors must be handled in an optimized way to achieve ideal resource sharing. Hence, efficient resource reservation is required to ensure load optimization in the cloud. This work presents an integrated resource reservation and workload balancing algorithm for effective cloud provisioning. The strategy develops a Priority-based Resource Scheduling Model to obtain resource reservation with threshold-based load balancing, improving the efficiency of the cloud framework. Better utilization of virtual machines through suitable workload adjustment is then achieved by dynamically picking a job from the submitted jobs using the Priority-based Resource Scheduling Model. Experimental evaluations show that the proposed scheme gives better results, reducing execution time with minimum resource cost and improved resource utilization under dynamic resource provisioning conditions.
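A threshold-based placement rule of the kind described can be sketched as follows. The utilization thresholds and tie-breaking are assumed for illustration, not taken from the paper:

```python
LOWER, UPPER = 0.2, 0.8  # assumed utilization thresholds

def classify(utilization):
    """Label a host as underloaded, balanced, or overloaded."""
    if utilization < LOWER:
        return "underloaded"
    return "overloaded" if utilization > UPPER else "balanced"

def place_vm(demand, hosts):
    """Place a VM on the least-utilized host that stays at or under the
    upper threshold after accepting the VM's demand; returns None when no
    host can take it without overloading."""
    candidates = [(u, h) for h, u in hosts.items() if u + demand <= UPPER]
    if not candidates:
        return None
    u, host = min(candidates)
    hosts[host] = u + demand
    return host
```

Rejecting placements that would cross the upper threshold is what keeps hosts out of the overloaded band in the first place, rather than migrating VMs away after the fact.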
Elastic neural network method for load prediction in cloud computing grid (IJECEIAES)
Cloud computing still has no standard definition, yet it concerns on-demand delivery of resources and services over the Internet or a network. It has gained much popularity in the last few years due to rapid growth in technology and the Internet. Many technical challenges in cloud computing are yet to be tackled, such as virtual machine migration, server consolidation, fault tolerance, scalability, and availability. This research is chiefly concerned with balancing server load: spreading the load between the nodes of a distributed system to improve resource utilization and job response time, enhance scalability, and increase user satisfaction. A load rebalancing algorithm with dynamic resource allocation is presented to adapt to the changing needs of a cloud environment. This research presents a modified elastic adaptive neural network (EANN) with modified adaptive error smoothing to build an evolving system that predicts virtual machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using a cloud simulator and made comparisons with previously suggested approaches. The experimental results show that the suggested method significantly outperforms the existing approaches.
NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C... (ijccsa)
Cloud collaboration is an emerging technology that enables sharing of computer files using cloud computing: cloud resources are assembled and cloud services are provided using these resources, allowing users to share documents. Resource allocation in the cloud is challenging because resources offer different Quality of Service (QoS) and services running on these resources may fail to meet user demands. We propose a solution for resource allocation based on multi-attribute QoS scoring, considering parameters such as the distance to the resource from the user site, the reputation of the resource, task completion time, task completion ratio, and the load at the resource. The proposed algorithm, referred to as Multi-Attribute QoS Scoring (MAQS), uses a neuro-fuzzy system. We have also included a speculative manager to handle fault tolerance. It is shown that the proposed algorithm performs better than others, including power-trust reputation-based algorithms and the harmony method, which use a single attribute to compute the reputation score of each allocated resource.
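A linear weighted-sum stand-in for a multi-attribute QoS score over the five listed attributes might look like this. The weights and normalization are assumptions; the paper derives the score with a neuro-fuzzy system rather than fixed weights:

```python
def maqs_score(attrs, weights=None):
    """Multi-attribute QoS score (illustrative linear stand-in).
    Lower distance, completion time, and load are better, so those
    attributes are inverted; each attribute is normalised to [0, 1]."""
    if weights is None:
        weights = {"distance": 0.2, "reputation": 0.25,
                   "completion_time": 0.2, "completion_ratio": 0.2,
                   "load": 0.15}  # assumed weights
    inverted = {"distance", "completion_time", "load"}
    score = 0.0
    for name, w in weights.items():
        v = attrs[name]
        score += w * ((1 - v) if name in inverted else v)
    return score

def best_resource(resources):
    """Allocate to the resource with the highest multi-attribute score."""
    return max(resources, key=lambda r: maqs_score(resources[r]))
```

The contrast with single-attribute schemes is visible here: a resource with a mediocre reputation can still win if its distance, load, and completion metrics dominate the combined score.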
Dynamic Resource Provisioning with Authentication in Distributed Database (IJCATR)
Data centers are among the largest consumers of energy in shared power infrastructure, and public cloud workloads carry different priorities and performance requirements across applications [4]. Cloud data centers are capable of sensing opportunities to present different programs. The proposed construction addresses privacy leakage at a given security level in a distributed cloud system; handling its persistent characteristics yields a substantial increase in information that can be used to augment profit, reduce overhead, or both. Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. Three empirical algorithms for estimating assignment ratios are analyzed theoretically and compared using real Internet latency data.
Cloud computing is a new computing paradigm that aims to transform computing into a utility, just as electricity was first generated at home and evolved to be supplied by a few utility providers. Load balancing is a mapping strategy that efficiently distributes the task load across multiple computational resources in the network based on system status, in order to improve performance. The objective of this research paper is to show the results of the Hybrid DEGA, in which the genetic algorithm (GA) is applied after differential evolution (DE).
Evolution of Distributed Computing: Scalable computing over the Internet – Technologies for network-based systems – Clusters of cooperative computers – Grid computing infrastructures – Cloud computing – Service-oriented architecture – Introduction to Grid architecture and standards – Elements of Grid – Overview of Grid architecture.
Development of a Suitable Load Balancing Strategy In Case Of a Cloud Computi... (IJMER)
Cloud computing is an attractive technology in the field of computer science. Gartner's report says that the cloud will bring changes to the IT industry. The cloud is changing our lives by providing users with new types of services; users get service from a cloud without paying attention to the details. NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. More and more people are paying attention to cloud computing, which is efficient and scalable, but maintaining the stability of processing so many jobs in a cloud computing environment is a very complex problem, and load balancing has received much attention from researchers. Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial for the load balancing problem, to improve system performance and maintain stability. Load balancing schemes are either static or dynamic, depending on whether system dynamics matter: static schemes do not use system information and are less complex, while dynamic schemes bring additional costs but can adapt as the system status changes. A dynamic scheme is used here for its flexibility. The model has a main controller and balancers to gather and analyze information, so the dynamic control has little influence on the other working nodes; the system status then provides a basis for choosing the right load balancing strategy. The load balancing model given in this research article is aimed at the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations. Thus, this model divides the public cloud into several cloud partitions; when the environment is very large and complex, these divisions simplify the load balancing. The cloud has a main controller that chooses the suitable partition for arriving jobs, while the balancer for each cloud partition chooses the best load balancing strategy.
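The two-level design, a main controller choosing a partition for arriving jobs and per-partition balancers choosing a strategy, can be sketched as follows. The thresholds and strategy names are illustrative assumptions, not values from the article:

```python
class Partition:
    def __init__(self, name, load=0.0):
        self.name, self.load = name, load

    def status(self):
        # assumed thresholds for idle / normal / overloaded
        if self.load < 0.3:
            return "idle"
        return "normal" if self.load < 0.8 else "overloaded"

    def strategy(self):
        # an idle partition can afford a cheap static scheme; a busier
        # one needs dynamic balancing that reacts to status changes
        return "static" if self.status() == "idle" else "dynamic"

class MainController:
    def __init__(self, partitions):
        self.partitions = partitions

    def choose(self):
        """Route an arriving job to the least-loaded partition that is
        not overloaded; return None when every partition is overloaded."""
        ok = [p for p in self.partitions if p.status() != "overloaded"]
        return min(ok, key=lambda p: p.load) if ok else None
```

Because the controller only reads partition status, the per-partition balancers can switch strategies locally without coordinating with the other working nodes, which matches the low-interference property the article claims for the dynamic control.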
Survey on Division and Replication of Data in Cloud for Optimal Performance a...IJSRD
Outsourcing information to an outsider authoritative control, as is done in distributed computing, offers ascend to security concerns. The information trade off may happen because of assaults by different clients and hubs inside of the cloud. Hence, high efforts to establish safety are required to secure information inside of the cloud. On the other hand, the utilized security procedure should likewise consider the advancement of the information recovery time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS) that all in all methodologies the security and execution issues. In the DROPS procedure, we partition a record into sections, and reproduce the divided information over the cloud hubs. Each of the hubs stores just a itary part of a specific information record that guarantees that even in the event of a fruitful assault, no important data is uncovered to the assailant. Additionally, the hubs putting away the sections are isolated with certain separation by method for diagram T-shading to restrict an assailant of speculating the areas of the sections. Moreover, the DROPS procedure does not depend on the customary cryptographic procedures for the information security; in this way alleviating the arrangement of computationally costly approaches. We demonstrate that the likelihood to find and bargain the greater part of the hubs putting away the sections of a solitary record is to a great degree low. We likewise analyze the execution of the DROPS system with ten different plans. The more elevated amount of security with slight execution overhead was watched.
A quality of service management in distributed feedback control scheduling ar...csandit
In our days, there are many real-time applications that use data which are geographically
dispersed. The distributed real-time database management systems (DRTDBMS) have been used
to manage large amount of distributed data while meeting the stringent temporal requirements
in real-time applications. Providing Quality of Service (QoS) guarantees in DRTDBMSs is a
challenging task. To address this problem, different research works are often based on
distributed feedback control real-time scheduling architecture (DFCSA). Data replication is an
efficient method to help DRTDBMS meeting the stringent temporal requirements of real-time
applications. In literature, many works have designed algorithms that provide QoS guarantees
in distributed real-time databases using only full temporal data replication policy.
In this paper, we have applied two data replication policies in a distributed feedback control
scheduling architecture to manage a QoS performance for DRTDBMS. The first proposed data
replication policy is called semi-total replication, and the second is called partial replication
policy.
Data Warehouses store integrated and consistent data in a subject-oriented data repository dedicated
especially to support business intelligence processes. However, keeping these repositories updated usually
involves complex and time-consuming processes, commonly denominated as Extract-Transform-Load tasks.
These data intensive tasks normally execute in a limited time window and their computational requirements
tend to grow in time as more data is dealt with. Therefore, we believe that a grid environment could suit
rather well as support for the backbone of the technical infrastructure with the clear financial advantage of
using already acquired desktop computers normally present in the organization. This article proposes a
different approach to deal with the distribution of ETL processes in a grid environment, taking into account
not only the processing performance of its nodes but also the existing bandwidth to estimate the grid
availability in a near future and therefore optimize workflow distribution.
THRESHOLD BASED VM PLACEMENT TECHNIQUE FOR LOAD BALANCED RESOURCE PROVISIONIN...IJCNCJournal
The unbalancing load issue is a multi-variation, multi-imperative issue that corrupts the execution and productivity of processing assets. Workload adjusting methods give solutions of load unbalancing circumstances for two bothersome aspects over-burdening and under-stacking. Cloud computing utilizes planning and workload balancing for a virtualized environment, resource partaking in cloud foundation. These two factors must be handled in an improved way in cloud computing to accomplish ideal resource sharing. Henceforth, there requires productive resource, asset reservation for guaranteeing load advancement in the cloud. This work aims to present an incorporated resource, asset reservation, and workload adjusting calculation for effective cloud provisioning. The strategy develops a Priority-based Resource Scheduling Model to acquire the resource, asset reservation with threshold-based load balancing for improving the proficiency in cloud framework. Extending utilization of Virtual Machines through the suitable and sensible outstanding task at hand modifying is then practiced by intensely picking a job from submitting jobs using Priority-based Resource Scheduling Model to acquire resource asset reservation. Experimental evaluations represent, the proposed scheme gives better results by reducing execution time, with minimum resource cost and improved resource utilization in dynamic resource provisioning conditions.
Elastic neural network method for load prediction in cloud computing gridIJECEIAES
Cloud computing still has no standard definition, yet it is concerned with Internet or network on-demand delivery of resources and services. It has gained much popularity in last few years due to rapid growth in technology and the Internet. Many issues yet to be tackled within cloud computing technical challenges, such as Virtual Machine migration, server association, fault tolerance, scalability, and availability. The most we are concerned with in this research is balancing servers load; the way of spreading the load between various nodes exists in any distributed systems that help to utilize resource and job response time, enhance scalability, and user satisfaction. Load rebalancing algorithm with dynamic resource allocation is presented to adapt with changing needs of a cloud environment. This research presents a modified elastic adaptive neural network (EANN) with modified adaptive smoothing errors, to build an evolving system to predict Virtual Machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using cloud simulator and made comparisons with previously suggested approaches in the previous work. The experimental results show that suggested method betters present approaches significantly and all these approaches.
NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C...ijccsa
Cloud collaboration is an emerging technology that enables the sharing of computer files using cloud computing: cloud resources are assembled, cloud services are provided using these resources, and users are able to share documents. Resource allocation in the cloud is challenging because resources offer different Quality of Service (QoS), and services running on these resources may fail to meet user demands. We propose a solution for resource allocation based on multi-attribute QoS scoring, considering parameters such as the distance from the user site to the resource, the reputation of the resource, task completion time, task completion ratio, and the load at the resource. The proposed algorithm, referred to as Multi-Attribute QoS Scoring (MAQS), uses a neuro-fuzzy system. We have also included a speculative manager to handle fault tolerance. The paper shows that the proposed algorithm performs better than others, including PowerTrust reputation-based algorithms and the harmony method, which use a single attribute to compute the reputation score of each allocated resource.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
Dynamic Resource Provisioning with Authentication in Distributed DatabaseEditor IJCATR
Data centers account for some of the largest energy consumption in shared computing infrastructure. Public cloud workloads carry different priorities and performance requirements across applications [4], and cloud data centers are capable of sensing opportunities to serve these different programs. The proposed construction addresses privacy leakage at a stated security level in a distributed cloud system, dealing with persistent workload characteristics so that there is a substantial increase in information that can be used to augment profit, reduce overhead, or both. Data mining, the process of analyzing data from different perspectives and summarizing it into useful information, is applied here. Three empirical assignment-estimation algorithms have been proposed; their ratios are analyzed theoretically and compared using real Internet latency data to test the methods.
Cloud computing is a new computing paradigm that aims to transform computing into a utility, just as electricity was first generated at home and evolved to be supplied by a few utility providers. Load balancing is a mapping strategy that efficiently distributes the task load across multiple computational resources in the network, based on the system status, to improve performance. The objective of this research paper is to show the results of a hybrid DE-GA, in which a genetic algorithm (GA) is run after differential evolution (DE).
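The paper does not spell out its hybrid, but the stated structure ("GA is run after DE") can be sketched as a two-stage optimizer: a DE pass refines a random population, and a GA pass then evolves the DE output further. The fitness function, population size, and operator settings below are illustrative assumptions, not the paper's parameters.

```python
import random

def differential_evolution(pop, fitness, gens=30, f=0.5, cr=0.9):
    """Classic DE/rand/1/bin with greedy selection."""
    dim = len(pop[0])
    for _ in range(gens):
        for i, x in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + f * (b[d] - c[d]) if random.random() < cr else x[d]
                     for d in range(dim)]
            if fitness(trial) < fitness(x):   # keep trial only if it improves
                pop[i] = trial
    return pop

def genetic_algorithm(pop, fitness, gens=30, mut=0.1):
    """Elitist GA: keep the best half, refill with crossover + mutation."""
    dim = len(pop[0])
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:len(pop) // 2]         # elites survive unchanged
        children = []
        while len(children) < len(pop) - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, dim)
            child = [g + random.gauss(0, mut) for g in p1[:cut] + p2[cut:]]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

def hybrid_dega(fitness, dim=3, size=20):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
    pop = differential_evolution(pop, fitness)   # stage 1: DE explores
    return genetic_algorithm(pop, fitness)       # stage 2: GA refines DE's output
```

On a simple sphere function the DE stage does most of the convergence and the elitist GA cannot lose the best DE solution, which is the usual rationale for this ordering.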
Evolution of distributed computing: scalable computing over the Internet – technologies for network-based systems – clusters of cooperative computers – grid computing infrastructures – cloud computing – service-oriented architecture – introduction to grid architecture and standards – elements of grid – overview of grid architecture.
Development of a Suitable Load Balancing Strategy In Case Of a Cloud Computi...IJMER
Cloud computing is an attractive technology in the field of computer science. Gartner's report says that the cloud will bring changes to the IT industry. The cloud is changing our lives by providing users with new types of services; users get service from a cloud without paying attention to the details. NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. More and more people are paying attention to cloud computing. Cloud computing is efficient and scalable, but maintaining the stability of processing so many jobs in a cloud environment is a very complex problem, and load balancing has received much attention from researchers. Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial to improving system performance and maintaining stability. Load balancing schemes are either static or dynamic, depending on whether the system dynamics matter. Static schemes do not use system information and are less complex, while dynamic schemes bring additional costs to the system but can adapt as the system status changes. A dynamic scheme is used here for its flexibility. The model has a main controller and balancers to gather and analyze the information, so the dynamic control has little influence on the other working nodes. The system status then provides a basis for choosing the right load balancing strategy. The load balancing model given in this research article is aimed at the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations; the model therefore divides the public cloud into several cloud partitions. When the environment is very large and complex, these divisions simplify load balancing. The cloud has a main controller that chooses the suitable partition for each arriving job, while the balancer for each cloud partition chooses the best load balancing strategy.
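The two-level decision described above (main controller picks a partition by status, then a per-partition balancer acts) can be sketched as follows. The status thresholds and the "prefer the job's local partition unless it is overloaded" policy are illustrative assumptions, not the paper's exact rules.

```python
IDLE, NORMAL, OVERLOADED = "idle", "normal", "overloaded"

def partition_status(load, idle_t=0.3, over_t=0.8):
    """Classify a partition by its load ratio (hypothetical thresholds)."""
    if load < idle_t:
        return IDLE
    return OVERLOADED if load > over_t else NORMAL

def choose_partition(partitions, job_location):
    """Main controller: keep the job in its local partition unless that
    partition is overloaded; otherwise prefer the least-loaded idle
    partition, falling back to the globally least-loaded one."""
    if partition_status(partitions[job_location]["load"]) != OVERLOADED:
        return job_location
    idle = [(name, p["load"]) for name, p in partitions.items()
            if partition_status(p["load"]) == IDLE]
    if idle:
        return min(idle, key=lambda kv: kv[1])[0]
    return min(partitions, key=lambda name: partitions[name]["load"])
```

A per-partition balancer would then pick a concrete VM inside the chosen partition using whichever static or dynamic strategy fits that partition's status.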
Locality Sim : Cloud Simulator with Data Localityneirew J
Cloud Computing (CC) is a model for enabling on-demand access to a shared pool of configurable computing resources. Testing and evaluating the performance of the cloud environment with respect to allocation, provisioning, scheduling, and data allocation policy deserves great attention, and using a cloud simulator saves time and money while providing a flexible environment to evaluate new research work. Unfortunately, current simulators (e.g., CloudSim, NetworkCloudSim, GreenCloud) treat data by size only, without any consideration of data allocation policy or locality. The NetworkCloudSim simulator is one of the most commonly used, because it includes modules that support the functions needed by a simulated cloud environment and can be extended with new modules. In this paper, the NetworkCloudSim simulator has been extended and modified to support data locality; the modified simulator is called LocalitySim. The accuracy of the proposed LocalitySim simulator has been validated against a mathematical model. The proposed simulator has also been used to test the performance of a three-tier data center as a case study, taking the data locality feature into account.
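The data locality idea the simulator adds can be shown with a tiny scheduling sketch: run a task on the host that already stores its input block when possible, otherwise pick the cheapest transfer. This is a conceptual illustration with made-up data structures, not LocalitySim's API.

```python
def schedule_with_locality(task_data, data_location, transfer_cost):
    """Place each task on the host storing its input block when possible;
    otherwise fall back to the host with the cheapest transfer.
    task_data: task -> block id; data_location: block id -> host;
    transfer_cost: host -> cost of pulling a remote block (assumed units)."""
    plan = {}
    for task, block in task_data.items():
        local_host = data_location.get(block)
        if local_host is not None:
            plan[task] = local_host                       # node-local: no transfer
        else:
            plan[task] = min(transfer_cost, key=transfer_cost.get)
    return plan
```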
Virtual Machine Migration and Allocation in Cloud Computing: A Reviewijtsrd
Cloud computing is an emerging computing technology that maintains computational resources in large data centers accessed through the Internet, rather than on local computers. VM migration provides the capability to balance load, perform system maintenance, etc. Virtualization technology gives cloud computing its power. Virtual machine migration techniques can be divided into two categories: the pre-copy and post-copy approaches. The process of moving running applications or VMs from one physical machine to another is known as VM migration; in the migration process, the processor state, storage, memory, and network connections are moved from one host to another. Two important performance metrics are downtime and total migration time, which users care about most because they determine service degradation and the time during which the service is unavailable. This paper focuses on the analysis of live VM migration techniques in cloud computing. Khushbu Singh Chandel | Dr. Avinash Sharma "Virtual Machine Migration and Allocation in Cloud Computing: A Review" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1, December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29556.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-network/29556/virtual-machine-migration-and-allocation-in-cloud-computing-a-review/khushbu-singh-chandel
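The pre-copy approach and its two headline metrics (total migration time vs. downtime) can be illustrated with a toy model: each round re-sends the pages dirtied during the previous copy, and the VM is paused only for the final small dirty set. The constant dirty rate and the thresholds are modelling assumptions, not measurements.

```python
def precopy_migrate(total_pages, dirty_rate=0.3, threshold=32, max_rounds=30):
    """Toy model of iterative pre-copy migration.
    total_pages: VM memory size in pages; dirty_rate: fraction of the pages
    just copied that get dirtied again during the copy (assumed constant).
    Returns (pages transferred in total, pages copied during the pause)."""
    dirty = total_pages          # round 1 copies all of memory
    transferred = 0
    rounds = 0
    while dirty > threshold and rounds < max_rounds:
        transferred += dirty
        dirty = max(1, int(dirty * dirty_rate))  # dirtied while copying
        rounds += 1
    # stop-and-copy: pause the VM and send the remaining dirty pages
    return transferred + dirty, dirty
```

The trade-off is visible directly: more rounds shrink downtime (the second value) at the cost of a larger total transfer (the first value), which is why write-heavy workloads favour post-copy instead.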
A Study on Replication and Failover Cluster to Maximize System UptimeYogeshIJTSRD
Clients of different types across the globe use cloud services because cloud computing offers features and advantages such as building cost-effective solutions for business and scaling resources up and down according to current demand. But from the cloud provider's point of view, there are many challenges that must be faced to ensure hassle-free service delivery to clients. One such problem is maintaining high availability of services. This project presents a high-availability (HA) solution for business continuity and disaster recovery through the configuration of services such as load balancing, elasticity, and replication. Miss Pratiksha Bhagawati | Mrs. Priya N "A Study on Replication and Failover Cluster to Maximize System Uptime" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4, June 2021, URL: https://www.ijtsrd.com/papers/ijtsrd41249.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/41249/a-study-on-replication-and-failover-cluster-to-maximize-system-uptime/miss-pratiksha-bhagawati
The growth of the Internet of Things and wireless technology has led to enormous generation of data for uses such as healthcare, scientific, and data-intensive applications. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging, so an efficient fault-tolerance mechanism is required. Data replication is an efficient fault-tolerance mechanism that existing methodologies have considered. However, data replica placement is challenging, and existing methods are not efficient with respect to the dynamic application requirements of a cloud-based storage area network, incurring latency and thus a higher cost of data transmission. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that helps minimize latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The results show that the BGDRP technique reduces data access latency, computation time, and cost over state-of-the-art techniques.
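The core structure in that abstract, a bipartite graph between data items and storage nodes weighted by latency, can be sketched with a simple greedy assignment. This is a stand-in to show the shape of the problem, not the paper's BGDRP algorithm, and the latency and capacity figures are made up.

```python
def place_replicas(latency, capacity, replicas=2):
    """Greedy bipartite placement: for each data item, pick its
    lowest-latency nodes that still have free capacity.
    latency: item -> {node: latency_ms}; capacity: node -> free replica slots."""
    placement = {}
    cap = dict(capacity)
    for item, costs in latency.items():
        chosen = []
        for node in sorted(costs, key=costs.get):   # cheapest edges first
            if cap[node] > 0 and len(chosen) < replicas:
                cap[node] -= 1
                chosen.append(node)
        placement[item] = chosen
    return placement
```

A real solver would optimize all items jointly (e.g., min-cost matching on the bipartite graph) rather than greedily per item, which is presumably where BGDRP improves on naive placement.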
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTINGijcsit
Cloud computing is a revolution in the current generation of IT enterprise. It displaces databases and application software to large data centers, where the management of services and data may not be predictable, whereas conventional IT service solutions are under proper logical, physical, and personnel controls. This attribute, however, introduces security challenges that have not been well understood. This paper concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). We designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, with some prominent features. A homomorphic token is used for distributed verification of erasure-coded data, and with this scheme we can identify misbehaving servers. Unlike past works, our scheme supports effective and secure dynamic operations on data blocks, such as data insertion, deletion, and modification. The security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, complex failures, and server colluding attacks.
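The verification idea behind such schemes can be shown with a toy linear challenge-response check: the client precomputes a token over its blocks with secret coefficients, later challenges the server with those coefficients, and compares the server's aggregate against the token. This is a drastically simplified illustration, without the erasure coding, PRF-derived challenges, or security properties of the actual scheme.

```python
import random

P = 2_147_483_647  # prime modulus for the toy arithmetic

def precompute_token(blocks, seed):
    """Client side: derive secret coefficients and the expected aggregate."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(1, P) for _ in blocks]
    token = sum(c * b for c, b in zip(coeffs, blocks)) % P
    return coeffs, token

def server_response(stored_blocks, coeffs):
    """Server side: aggregate its stored blocks under the challenge."""
    return sum(c * b for c, b in zip(coeffs, stored_blocks)) % P

def verify(token, response):
    """Any single-block tampering shifts the response by a nonzero multiple."""
    return token == response
```

In a real scheme the client keeps many such tokens (one per future challenge) and never reveals coefficients before the corresponding challenge, so the server cannot precompute answers over deleted data.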
A load balancing strategy for reducing data loss risk on cloud using remodif...IJECEIAES
Cloud computing constantly faces new problems in meeting the demands of organizations around the world. Reducing response time without the risk of data loss is a critical issue for user requests in cloud computing. Load balancing ensures quick response from virtual machines (VMs), proper usage of VMs, throughput, and minimal VM cost. This paper introduces a re-modified throttled algorithm (RTMA) that reduces the risk of data hampering and data loss by considering VM availability, which increases the system's performance. The response time of virtual machines has been considered in our work so that, while the migration process is running, data will not overflow the VMs; the data migration process thus becomes more reliable. We simulated the proposed algorithm in the Cloud Analyst tool and successfully reduced the risk of data loss while maintaining response time.
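The baseline throttled algorithm that RTMA modifies keeps an index table of VM availability: a request is assigned to the first available VM, and if none is free it waits rather than overloading a busy VM. A minimal sketch of that baseline (not the paper's re-modified variant) looks like this:

```python
class ThrottledBalancer:
    """Index table of VM availability; each request goes to the first
    free VM, otherwise it is queued instead of overloading a busy VM."""

    def __init__(self, vm_ids):
        self.available = {vm: True for vm in vm_ids}
        self.queue = []

    def allocate(self, request_id):
        for vm, free in self.available.items():
            if free:
                self.available[vm] = False
                return vm
        self.queue.append(request_id)   # throttle: wait for a release
        return None

    def release(self, vm):
        if self.queue:
            self.queue.pop(0)           # hand the VM straight to a waiter
        else:
            self.available[vm] = True
```

RTMA's refinement, per the abstract, is to factor VM response time into the availability check so data is not pushed to a VM that is mid-migration.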
Dynamic Cloud Partitioning and Load Balancing in Cloud Shyam Hajare
Cloud computing is an emerging, transformational paradigm in the field of information technology. It focuses on providing various services on demand, of which resource allocation and secure data storage are two examples. Storing huge amounts of data and accessing data through such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing that considers both static and dynamic parameters can improve the performance of the cloud service provider and user satisfaction. Implementing the model can provide a dynamic way of selecting resources depending on the state of the cloud environment at the time cloud provisions are accessed, based on cloud partitioning. This model can provide an effective load balancing algorithm for the cloud environment, better refresh-time methods, and better load status evaluation methods.
An Algorithm to synchronize the local database with cloud DatabaseAM Publications
Since the cloud computing [1] platform is widely accepted by industry, a variety of applications are designed to target a cloud platform. Database as a Service (DaaS) is one of the powerful platforms of cloud computing. There are many research issues in the DaaS platform, one of which is data synchronization. Many approaches have been suggested in the literature for synchronizing a local database from within the cloud environment; unfortunately, very few works are available on synchronizing a cloud database from within the local database. The aim of this paper is to provide an algorithm that solves the problem of data synchronization from a local database to a cloud database.
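The local-to-cloud direction the paper targets can be sketched with a timestamp-based push: rows modified after the last sync point are written to the cloud copy unless the cloud already holds a newer version. The dict-based row schema with a `modified` timestamp is an assumption for illustration; the paper's actual algorithm is not reproduced here.

```python
def sync_to_cloud(local_rows, cloud_rows, last_sync):
    """Push local rows changed since `last_sync` into the cloud copy.
    Rows are dicts keyed by id, each carrying a 'modified' timestamp
    (assumed schema). Last-writer-wins conflict rule; returns pushed ids."""
    pushed = []
    for row_id, row in local_rows.items():
        if row["modified"] > last_sync:
            remote = cloud_rows.get(row_id)
            if remote is None or remote["modified"] < row["modified"]:
                cloud_rows[row_id] = dict(row)   # upsert into the cloud side
                pushed.append(row_id)
    return pushed
```

A production version would also track deletions (tombstones) and use a change log rather than scanning every row.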
Adaptive offloading in mobile cloud computing, via an automatic task-partitioning approach, augments execution by migrating heavy computation from mobile devices to resourceful cloud servers and then receiving the results from them via wireless networks. Offloading is an effective way to overcome the resource and functionality constraints of mobile devices, since it releases them from intensive processing and increases the performance of mobile applications in terms of response time. Offloading brings many potential benefits, such as energy saving, performance improvement, reliability improvement, ease for software developers, and better exploitation of contextual information. Parameters covering method transitions, response times, cost, and energy consumption are dynamically re-estimated at runtime during application execution.
Data Partitioning in Mongo DB with CloudIJAAS Team
Cloud computing offers various useful services (IaaS, PaaS, SaaS) for deploying applications at low cost, making them available anytime and anywhere, with the expectation that they be scalable and consistent. One technique for improving scalability is data partitioning. The existing techniques are not capable of tracking the data access pattern. This paper implements a scalable, workload-driven technique for improving the scalability of web applications. The experiments are carried out over the cloud using the NoSQL data store MongoDB to scale out. This approach offers low response time, high throughput, and fewer distributed transactions. The partitioning technique is evaluated using the TPC-C benchmark.
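The basic mechanism behind scaling out a store like MongoDB is routing each document to a shard by its shard key. The hashed routing below is a client-side sketch in the spirit of hashed sharding, not MongoDB's actual chunk-based router, and is shown only to make the partitioning idea concrete; a workload-driven scheme like the paper's would instead group keys that are accessed together to cut distributed transactions.

```python
import hashlib

def shard_for(doc_key, shards):
    """Deterministically route a document to one of `shards`
    by hashing its shard key (simplified hashed-sharding sketch)."""
    h = int(hashlib.md5(str(doc_key).encode()).hexdigest(), 16)
    return shards[h % len(shards)]
```

Determinism is the essential property: every client computes the same shard for the same key, so reads find the data that writes placed.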
ANALYSIS OF ATTACK TECHNIQUES ON CLOUD BASED DATA DEDUPLICATION TECHNIQUESneirew J
ABSTRACT
Data in the cloud is increasing rapidly, and this huge amount of data is stored in various data centers around the world. Data deduplication allows lossless compression by removing duplicate data, so data centers can utilize storage efficiently by removing redundancy. Attacks on cloud computing infrastructure are not new, but attacks based on the deduplication feature of cloud storage are relatively new and have become a pressing concern. Attacks on deduplication in the cloud environment can happen in several ways and can give away sensitive information. Although the deduplication feature enables efficient storage usage and bandwidth utilization, it has drawbacks. In this paper, data deduplication features are closely examined, and the behavior of data deduplication under its various parameters is explained and analyzed.
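The mechanism being attacked is easy to see in miniature: a dedup store keys chunks by content hash and skips storing duplicates. The `existed` flag below is exactly the observable signal behind the classic side-channel, since a skipped upload tells the uploader that someone has already stored that content. This is a toy store for illustration, not any particular provider's design.

```python
import hashlib

class DedupStore:
    """Content-addressed store: identical chunks are kept only once."""

    def __init__(self):
        self.chunks = {}   # sha256 hex digest -> chunk bytes

    def put(self, data):
        digest = hashlib.sha256(data).hexdigest()
        existed = digest in self.chunks
        if not existed:
            self.chunks[digest] = data
        # `existed` leaking to the client (e.g. as a saved upload) is the
        # side channel: it reveals whether the content was stored before.
        return digest, existed
```

Mitigations discussed in this literature include server-side-only deduplication and randomized thresholds that hide whether a given upload was the first copy.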
SUCCESS-DRIVING BUSINESS MODEL CHARACTERISTICS OF IAAS AND PAAS PROVIDERSneirew J
ABSTRACT Market analyses show that some cloud providers are significantly more successful than others. Research on the success-driving business model characteristics of cloud providers, and thus on the reasons for this performance discrepancy, is still limited. Whereas cloud business models have mostly been examined comprehensively, independently of the distinctly different cloud ecosystem roles, this paper takes a perspective shift from an overall towards a selective, role-specific, and thereby ecosystemic perspective on cloud business models. The goal of this paper is to identify the success-driving business model characteristics of the so far widely neglected core roles of the cloud ecosystem, the IaaS and PaaS providers, by conducting an exploratory multiple-case study. Twenty-one expert interviews with representatives from 17 cloud providers serve as the central data collection instrument. The result is a catalogue of generic as well as cloud-specific business model characteristics, the latter subdivided into role-overarching and role-specific ones. This catalogue supports cloud providers in the initial design, comparison, and revision of their business models. Researchers obtain a promising starting and reference point for future analysis of the business models of various cloud ecosystem roles.
Strategic Business Challenges in Cloud Systemsneirew J
For the past few years, the evolution of cloud computing has been emerging as one of the major advances in the history of computing. But is cloud computing the saviour of business? Does it signal the demise of the corporate IT function entirely? If cloud computing is to achieve its potential, there must be a clear understanding of the various issues involved, from the perspectives of both providers and consumers, across the technology, management, and business aspects. The objective of this research is to explore the strategic business, management, and technical challenges existing in cloud systems. It is believed that adopting a methodology and a corresponding architectural framework would serve as a comprehensive conceptual tool that shows a path for mitigating these challenges; effort is therefore put into proposing a suitable methodology with a brief description. The research concludes that the IBM Common Cloud Management Platform is one way to realize the combined features of various models, such as the Hub & Spoke model as a governance model and the Gen-Spec research methodology design for semantic and quality research studies, unified in the form of a reference architecture. However, to realize the full potential of the Customer-Respond-Adapt-Sense-Provider (conceptual) methodology for dealing with semantics, it is important to consider the Internet of Things Architecture Reference Model, in which resources are translated into services.
Laypeople's and Experts' Risk Perception of Cloud Computing Services neirew J
Cloud computing is revolutionising the way software services are procured and used by Government
organizations and SMEs. Quantitative risk assessment of Cloud services is complex and undermined by
specific security concerns regarding data confidentiality, integrity and availability. This study explores how
the gap between the quantitative risk assessment and the perception of the risk can produce a bias in the
decision-making process about Cloud computing adoption.
The risk perception of experts in Cloud computing (N=37) and laypeople (N=81) about ten Cloud
computing services was investigated using the psychometric paradigm. Results suggest that the risk
perception of Cloud services can be represented by two components, called “dread risk” and “unknown
risk”, which may explain up to 46% of the variance. Other factors influencing the risk perception were
“perceived benefits”, “trust in regulatory authorities” and “technology attitude”.
This study suggests some implications that could support Government and non-Government organizations
in their strategies for Cloud computing adoption.
Factors Influencing Risk Acceptance of Cloud Computing Services in the UK Gov...neirew J
Cloud computing services are increasingly being made available by the UK Government through the Government Digital Marketplace to reduce costs and improve IT efficiency; however, little is known about the factors influencing the decision-making process for adopting cloud services within the UK Government. This research aims to develop a theoretical framework for understanding risk perception and risk acceptance of cloud computing services. The study's subjects (N=24) were recruited from three UK Government organizations to attend a semi-structured interview. Transcribed texts were analyzed using interpretative phenomenological analysis. Results showed that the most important factors influencing risk acceptance of cloud services are perceived benefits and opportunities, the organization's risk culture, and perceived risks. We focused on perceived risks and perceived security concerns. Based on these results, we suggest a number of implications for risk managers, policy makers, and cloud service providers.
A Cloud Security Approach for Data at Rest Using FPE neirew J
In a cloud scenario, the biggest concern is the security of data. "Both data in transit and data at rest must be secure" is a primary goal of any organization. Data in transit can be secured using TLS-level security such as SSL certificates, but data at rest is less secure, as database servers in the public cloud domain are more prone to vulnerabilities. Not all cloud providers offer out-of-the-box encryption with their offerings, and implementing traditional encryption techniques causes many changes at both the application and database levels. This paper provides an efficient approach to encrypting data using Format-Preserving Encryption (FPE). FPE focuses on encrypting data without changing its format, making it easy to develop and migrate legacy applications to the cloud. It is capable of performing format-preserving encryption on numeric data, strings, and combinations of both. The paper describes the various features and advantages of this approach.
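The defining property, that ciphertext keeps the plaintext's format, can be demonstrated with a toy Feistel network over digit strings. This is strictly an illustration of the digits-in/digits-out idea; it is not NIST FF1/FF3 and must not be used as real encryption.

```python
import hashlib

def _round_fn(half, key, rnd, width):
    """Keyed pseudo-random round function mapping a half to an integer."""
    digest = hashlib.sha256(key + bytes([rnd]) + half.encode()).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def fpe_encrypt(digits, key, rounds=8):
    """Toy balanced Feistel over an even-length digit string:
    the ciphertext has exactly the same length and character set."""
    assert digits.isdigit() and len(digits) % 2 == 0
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for rnd in range(rounds):
        left, right = right, (
            f"{(int(left) + _round_fn(right, key, rnd, half)) % 10 ** half:0{half}d}"
        )
    return left + right

def fpe_decrypt(digits, key, rounds=8):
    """Run the Feistel rounds in reverse to invert fpe_encrypt."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for rnd in reversed(range(rounds)):
        left, right = (
            f"{(int(right) - _round_fn(left, key, rnd, half)) % 10 ** half:0{half}d}"
        ), left
    return left + right
```

Because a 16-digit card number encrypts to another 16-digit string, legacy schemas and validation rules keep working, which is exactly the migration benefit the abstract claims.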
Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications neirew J
Management of errors in multi-tenant cloud-based applications remains a challenging problem. The problem is compounded by (i) multiple versions of the application serving different clients, (ii) the agile manner in which the applications are released to clients, and (iii) variations in the specific usage patterns of each client. We propose a framework for isolating and managing errors in such applications. The proposed framework is evaluated with two different popular cloud-based applications, and empirical results are presented.
Benefits and Challenges of the Adoption of Cloud Computing in Businessneirew J
Business losses and economic downturns occur almost every day, so technology is needed in every organization. Cloud computing has played a major role in solving inefficiency problems in organizations and in increasing the growth of business, thus helping organizations stay competitive. It is required to improve and automate the traditional ways of doing business and has been considered an innovative way to improve business. Overall, cloud computing enables organizations to manage their business efficiently: unnecessary procedural, administrative, hardware, and software costs are avoided. Although cloud computing provides advantages, that does not mean there are no drawbacks. Security has become the major concern in the cloud, along with cloud attacks, and business organizations need to be alert to attacks on their cloud storage. The benefits and drawbacks of cloud computing in business are explored in this paper, and some solutions are provided to overcome the drawbacks. The method used is secondary research, that is, collecting data from published journal papers and conference papers.
Intrusion Detection and Marking Transactions in a Cloud of Databases Environm...neirew J
Cloud computing is a paradigm for large-scale distributed computing that includes several existing technologies. A database management system is a collection of programs that enables you to store, modify, and extract information from a database. Databases have now moved to cloud computing, but this introduces a set of threats that target a cloud database system. The unification of transaction-based applications in these environments also presents a set of vulnerabilities and threats that target a cloud database environment. In this context, we propose intrusion detection and transaction marking for a cloud database environment.
A Survey on Resource Allocation in Cloud Computingneirew J
Cloud computing is an on-demand service model that provides everything from applications to data centers on a pay-per-use basis. In order to allocate these resources properly and satisfy users' demands, an efficient and flexible resource allocation mechanism is needed. Due to increasing user demand, the resource allocation process has become more challenging and difficult, and developing optimal solutions for this process is one of the main focuses of research. In this paper, a literature review of proposed dynamic resource allocation techniques is presented.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...neirew J
Fast development of knowledge and communication has established a new computational style which is
known as cloud computing. One of the main issues considered by the cloud infrastructure providers, is to
minimize the costs and maximize the profitability. Energy management in the cloud data centers is very
important to achieve such goal. Energy consumption can be reduced either by releasing idle nodes or by
reducing the virtual machines migrations. To do the latter, one of the challenges is to select the placement
approach of the migrated virtual machines on the appropriate node. In this paper, an approach to reduce
the energy consumption in cloud data centers is proposed. This approach adapts harmony search
algorithm to migrate the virtual machines. It performs the placement by sorting the nodes and virtual
machines based on their priority in descending order. The priority is calculated based on the workload.
The proposed approach is simulated. The evaluation results show the reduction in the virtual machine
migrations, the increase of efficiency and the reduction of energy consumption.
Cloud Computing is an attractive research area for the last few years; and there have been a tremendous
grows in the number of educational institutions all over the world who have either adopted or are
considering migrating to cloud computing. However, there are many concerns and reservations about
adopting conventional or public cloud based solutions. A new paradigm of cloud based solution has been
proposed, namely, the private cloud based solutions, which becomes an attractive choice to educational
Institutions. This paper presents the adjustment and implementation of private-based cloud solution for
multi-campus educational institution, namely, Al-Balqa Applied University (BAU) in Jordan.
Implementation of the Open Source Virtualization Technologies in Cloud Computingneirew J
The “Virtualization and Cloud Computing” is a recent buzzword in the digital world. Behind this fancy
poetic phrase there lies a true picture of future computing for both in technical and social perspective.
Though the “Virtualization and Cloud Computing are recent but the idea of centralizing computation and
storage in distributed data centres maintained by any third party companies is not new but it came in way
back in 1990s along with distributed computing approaches like grid computing, Clustering and Network
load Balancing. Cloud computing provide IT as a service to the users on-demand basis. This service has
greater flexibility, availability, reliability and scalability with utility computing model. This new concept of
computing has an immense potential in it to be used in the field of e-governance and in the overall IT
development perspective in developing countries like Bangladesh.
A Broker-based Framework for Integrated SLA-Aware SaaS Provisioning neirew J
In the service landscape, the issues of service selection, negotiation of Service Level Agreements (SLA), and
SLA-compliance monitoring have typically been used in separate and disparate ways, which affect the
quality of the services that consumers obtain from their providers. In this work, we propose a broker-based
framework to deal with these concerns in an integrated mannerfor Software as a Service (SaaS)
provisioning. The SaaS Broker selects a suitable SaaS provider on behalf of the service consumer by using
a utility-driven selection algorithm that ranks the QoS offerings of potential SaaS providers. Then, it
negotiates the SLA terms with that provider based on the quality requirements of the service consumer. The
monitoring infrastructure observes SLA-compliance during service delivery by using measurements
obtained from third-party monitoring services. We also define a utility-based bargaining decision model
that allows the service consumer to express her sensitivity for each of the negotiated quality attributes and
to evaluate the SaaS provider offer in each round of negotiation. A use-case with few quality attributes and
their respective utility functions illustrates the approach.
Comparative Study of Various Platform as a Service Frameworks neirew J
Cloud computing is an emerging paradigm with three basic service models such as Software as a Service
(SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). This paper focuses on
different kinds of PaaS frameworks. PaaS model provides choice of cloud, developer framework and
application service. In this paper, detailed study of four open PaaS frameworks like AppScale, Cloud
Foundry, Cloudify, and OpenShift are explained with the architectural components. We also explained
more PaaS packages like Stratos, mOSAIC, BlueMix, Heroku, Amazon Elastic Beanstalk, Microsoft Azure,
Google App Engine and Stakato briefly. In this paper we present the comparative study of PaaS
frameworks.
A Proposed Model for Improving Performance and Reducing Costs of IT Through C...neirew J
Information technologies are affecting the big business enterprises of todays from data processing and
transactions to achieve the goals efficiently and effectively, affecting creates new business opportunities
and towards new competitive advantage, service must be enough to match the recent trends of IT such as
cloud computing. Cloud computing technology has provided all IT services. Therefore, cloud computing
offers an alternative to adaptable with technology model current , creating reducing cost (Fixed costs and
ongoing), the proliferation of high speed Internet connections through Rent, not acquisitions, cheaper
powerful computing technology and effective performance. The public and private clouds are characterized
by flexibility, operational efficiency that reduces costs improve performance. Also cloud computing
generates business creativity and innovation resulted from collaborative ideas of users; presents cloud
infrastructure and services; paving new markets; offering security in public and private clouds; and
providing environmental impact regarding utilizing green energy technology. In this paper, the main
concentrate the cloud computing.
Secure cloud transmission protocol (SCTP) was proposed to achieve strong authentication and secure
channel in cloud computing paradigm at preceding work. SCTP proposed with its own techniques to attain
a cloud security. SCTP was proposed to design multilevel authentication technique with multidimensional
password generations System to achieve strong authentication. SCTP was projected to develop multilevel
cryptography technique to attain secure channel. SCTP was proposed to blueprint usage profile based
intruder detection and prevention system to resist against intruder attacks. SCTP designed, developed and
analyzed using protocol engineering phases. Proposed SCTP and its techniques complete design has
presented using Petrinet production model. We present the designed SCTP petrinet models and its
analysis. We discussed the SCTP design and its performance to achieve strong authentication, secure
channel and intruder prevention. SCTP designed to use in any cloud applications. It can authorize,
authenticates, secure channel and prevent intruder during the cloud transaction. SCTP designed to protect
against different attack mentioned in literature. This paper depicts the SCTP performance analysis report
which compares with existing techniques that are proposed to achieve authentication, authorization,
security and intruder prevention.
Attribute Based Access Control (ABAC) for EHR in Fog Computing Environmentneirew J
Cisco recently proposed a new computing environment called fog computing to support latency-sensitive
and real time applications. It is a connection of billions of devices nearest to the network edge. This
computing will be appropriate for Electronic Medical Record (EMR) systems that are latency-sensitive in
nature. In this paper, we aim to achieve two goals: (1) Managing and sharing Electronic Health Records
(EHRs) between multiple fog nodes and cloud, (2) Focusing on security of EHR, which contains highly
confidential information. So, we will secure access into EHR on Fog computing without effecting the
performance of fog nodes. We will cater different users based on their attributes and thus providing
Attribute Based Access Control ABAC into the EHR in fog to prevent unauthorized access. We focus on
reducing the storing and processes in fog nodes to support low capabilities of storage and computing of fog
nodes and improve its performance.
A Baye's Theorem Based Node Selection for Load Balancing in Cloud Environmentneirew J
Cloud computing is a popular computing model as it renders service to large number of users request on
the fly and has lead to the proliferation of large number of cloud users. This has lead to the overloaded
nodes in the cloud environment along with the problem of load imbalance among the cloud servers and
thereby impacts the performance. Hence, in this paper a heuristic Baye's theorem approach is considered
along with clustering to identify the optimal node for load balancing. Experiments using the proposed
approach are carried out on cloudsim simulator and are compared with the existing approach. Results
demonstrates that task deployment performed using this approach has improved performance in terms of
utilization and throughput when compared to the existing approaches.
Lessons Learned From Implementing a Scalable PAAS Service by Using Single Boa...neirew J
When a Platform-as-a-Service is demanded and the cost for purchase and operation of servers,
workstations or personal computers is a challenge, single board computers may be an option to build an
inexpensive system. This paper describes the lessons learned from deploying the private cloud PaaS
solution AppScale on single-node systems and clusters of single board computers.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
Francesca Gottschalk - How can education support child empowerment.pptxEduSkills OECD
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
How libraries can support authors with open access requirements for UKRI fund...
International Journal on Cloud Computing: Services and Architecture (IJCCSA) Vol. 6, No. 3, June 2016
DOI: 10.5121/ijccsa.2016.6302
DATA DISTRIBUTION HANDLING ON CLOUD FOR
DEPLOYMENT OF BIG DATA
Samip Raut, Kamlesh Jaiswal, Vaibhav Kale, Akshay Mote, Ms. Soudamini
Pawar and Mrs. Suvarna Kadam
D. Y. Patil College of Engineering, Akurdi, Savitribai Phule Pune University, Pune
ABSTRACT
Cloud computing is an emerging model in computer science. It presents large-scale, on-demand infrastructure for varying workloads, and its primary use in practice is to process massive amounts of data. Processing large datasets has become crucial in research and business environments, and the big challenge associated with it is the vast infrastructure required. Cloud computing provides that infrastructure to store and process big data: VMs can be provisioned on demand and formed into clusters, and the MapReduce paradigm can be used to process the data, wherein the mapper assigns part of the task to particular VMs in the cluster and the reducer combines the individual outputs from each VM to produce the final result. We have proposed an algorithm to reduce the overall data distribution and processing time. We tested our solution in the Cloud Analyst simulation environment and found that our proposed algorithm significantly reduces the overall data processing time in the cloud.
KEYWORDS
Cloud Computing, Big Data, Cloud Analyst, MapReduce, Big Data Distribution
1. INTRODUCTION
Management and processing of large datasets is becoming increasingly important in research and business environments, and big data processing engines have experienced tremendous growth. The big challenge with large dataset processing is the infrastructure required, which can demand large investment. Cloud computing can significantly reduce this capital expenditure, enabling new business models in which providers offer on-demand virtualized infrastructure. To accommodate varying workloads, cloud computing presents large-scale, on-demand infrastructure, and the main data-crunching technique moves data to shared computational nodes. Big data is a collection of large datasets that cannot be processed by traditional computing techniques. Big data technologies are important in providing more exact analysis, which may lead to more effective decision-making, resulting in greater operational efficiency, reduced business risk and cost reduction. Several vendors, such as Amazon, IBM and Microsoft, offer technologies to handle big data. Hadoop provides an open-source framework for cloud computing as well as a distributed file system, and uses the MapReduce model: HDFS is the file system, and tasks are done with MapReduce, which reads the input in huge chunks, processes that input, and finally writes huge chunks of output. HDFS does not handle arbitrary access well. The HDFS service is provided by two processes: the NameNode, which handles file system management and provides control and services, and the DataNode, which provides block storage and retrieval services. There is one NameNode process per HDFS file system, and it is a single point of failure. Hadoop Core provides automatic NameNode backup and recovery, but there is no failover service.
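The map and reduce flow described above can be sketched in plain Java, without the Hadoop API. The class and method names below are illustrative only; this toy counts words across two input chunks, each chunk standing in for the split a mapper VM would receive:

```java
import java.util.*;

/** Minimal in-memory illustration of the MapReduce flow Hadoop implements. */
public class MiniMapReduce {

    /** Map: emit (word, 1) pairs for one chunk of input. */
    static List<Map.Entry<String, Integer>> map(String chunk) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : chunk.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                pairs.add(new AbstractMap.SimpleEntry<>(word, 1));
            }
        }
        return pairs;
    }

    /** Reduce: merge the per-chunk outputs into the final word counts. */
    static Map<String, Integer> reduce(List<List<Map.Entry<String, Integer>>> mapped) {
        Map<String, Integer> counts = new TreeMap<>();
        for (List<Map.Entry<String, Integer>> partial : mapped) {
            for (Map.Entry<String, Integer> pair : partial) {
                counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Each string stands for one input chunk assigned to one VM.
        List<String> chunks = Arrays.asList("big data on cloud", "cloud data");
        List<List<Map.Entry<String, Integer>>> mapped = new ArrayList<>();
        for (String chunk : chunks) mapped.add(map(chunk));
        System.out.println(reduce(mapped)); // {big=1, cloud=2, data=2, on=1}
    }
}
```

Hadoop performs the same two steps at scale: mappers run on the VMs holding each chunk, and reducers merge the partial outputs into the final result.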
The main objective of our proposed system is to significantly reduce the data distribution time over virtual clusters for data processing in the cloud. This will help to provide faster and more secure processing and management of large datasets. Various data distribution techniques are discussed in Section 2. Section 3 presents related work. Section 4 presents our proposed system design. Section 5 presents implementation details. Section 6 presents experimental results of executing tasks in CloudSim. Section 7 concludes the paper with future work.
2. DATA DISTRIBUTION TECHNIQUES
Data distribution is the technique of distributing partitioned data over several provisioned VMs. Load balancing needs to be taken into account so that equal amounts of data are distributed among all provisioned VMs. The different approaches for performing data distribution among provisioned VMs are [1]:
2.1 Centralised Approach
In this approach, VMs download the required dataset from a central repository. An initialization script makes every VM connect to the central repository, so each VM can fetch the required data after boot. The central server's bandwidth becomes the bottleneck for the whole transfer. The limitation of the centralised approach is that if many transfers are requested in parallel, the central server drops connections and suffers a "flash crowd effect" caused by thousands of VMs requesting blocks of data.
2.2 Semi-Centralised Approach
In the centralised approach, if many VMs issue requests in parallel, the central server drops connections. Semi-centralised approaches reduce this networking infrastructure stress by sharing the dataset across different machines in the data centre, so the VMs do not all hit the same share at the same time. The limitation of this approach appears when datasets change over time: if a dataset grows or expands, it is difficult to foresee the bias.
2.3 Hierarchical Approach
The semi-centralised approach is very hard to maintain if new data is continuously added. In the hierarchical approach, a relay tree is built in which a VM does not fetch data from the original store but from its parent node in the hierarchy: a few VMs access the central server to fetch the data, then serve that data to other VMs, and so on. The limitation of this approach is that it provides no fault tolerance during the transfer; if one VM gets stuck, the VM deployment fails after the transfers have been initiated.
2.4 P2P Approach
The hierarchical approach requires more synchronization; P2P streaming overlays such as PPLive or Sopcast, which are based on hierarchical multi-trees (a node belongs to several trees), can be used to implement it. In the P2P approach, each system acts as both server and client. The data centre environment presents low latency between VMs, no firewall or NAT issues, and no ISP traffic shaping, which makes it well suited to a P2P delivery approach for big data.
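The advantage of the P2P approach over the centralised one can be illustrated with a small back-of-the-envelope model. The sketch below (the class name and the one-transfer-per-round assumption are ours, not from the paper) compares how many rounds each approach needs to seed N VMs with one copy of the data:

```java
/** Compares rounds needed to seed N VMs with one data copy:
 *  centralised (the single server pushes to one VM per round) vs. P2P
 *  (every VM already holding the data also serves one new peer per round). */
public class DistributionRounds {

    static int centralisedRounds(int vms) {
        return vms; // one transfer per round from the single server
    }

    static int p2pRounds(int vms) {
        int holders = 1; // the seed copy
        int rounds = 0;
        while (holders - 1 < vms) { // holders includes the seed itself
            holders *= 2;           // each holder serves one new peer: capacity doubles
            rounds++;
        }
        return rounds;
    }

    public static void main(String[] args) {
        for (int n : new int[] {7, 63, 1023}) {
            System.out.println(n + " VMs: centralised=" + centralisedRounds(n)
                    + " rounds, p2p=" + p2pRounds(n) + " rounds");
        }
    }
}
```

Because every VM that has received the data immediately becomes a server, the number of sources doubles each round; this is the exponential growth of service capacity observed in the transient phase discussed in Sections 3 and 4.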
3. RELATED WORK
Several data distribution handling techniques have been proposed; the most efficient is peer-to-peer. In [2], S. Loughran et al. present a framework that enables dynamic deployment of a MapReduce service on virtual infrastructure from either public or private cloud providers. The strategy follows popular MapReduce practice: move the computation to the location where the data is stored, rather than moving the data to the computational node [3]. The deployment process creates a set of virtual machines (VMs) according to user-provided specifications; the framework automatically sets up a complete MapReduce architecture using a service catalog, then processes the data. To distribute data among the provisioned VMs, the input is partitioned: a partitioning service splits the input into multiple chunks at a central location, and these chunks are then distributed among the VMs for processing [1]. The service capacity of a P2P system evolves in two regimes: a transient phase, in which the system tries to catch up with bursty demand and trace measurements exhibit exponential growth of service capacity, and a steady phase, in which the service capacity scales with and tracks the offered load and the rate at which peers exit the system [5]. In [6], Gang Chen presents BestPeer++, a system that integrates cloud computing, database, and peer-to-peer technologies to deliver elastic data sharing services; to enhance the usability of P2P networks, the database community had previously proposed a series of peer-to-peer database management systems. In [7], Mohammed Radi proposed a Round Robin service broker policy to select the data centre that processes each request. In [9], Tanveer Ahmed and Yogendra Singh present a comparison of various load balancing policies using a tool called Cloud Analyst; the policies compared include Round Robin, Equally Spread Current Execution Load (ESCEL), and Throttled load balancing. In their results, the overall response times of the Round Robin and ESCEL policies are almost the same, while that of the Throttled policy is much lower than both. In [10], Soumya Ray and Ajanta De Sarkar present the concept of cloud computing along with research challenges in load balancing; they also discuss the advantages and disadvantages of cloud computing, and give a comparative survey of load balancing algorithms with respect to resource utilization, stability, static or dynamic behaviour, and process migration. In [11], Pooja Samal and Pranati Mishra address the load distribution problem across the nodes of a distributed system, improving both resource utilization and job response time by analyzing variants of the Round Robin algorithm.
4. DESIGN OF OUR PROPOSED SYSTEM
Figure 1: System Architecture
4.1 User Input Processing
The user provides parameters for the virtual infrastructure, such as the number and size of slave nodes, and also specifies the input files and the output folder location. This is the only interaction the end user has with the system; the proposed framework performs the rest of the process automatically.
4.2 Centralised Partitioning
The user can upload a huge file to the cloud. Centralised data partitioning splits the input bulk data into multiple chunks: when a user uploads a large data file, it is automatically partitioned into smaller chunks and stored on the cloud servers, which also reduces the burden on the storage server.
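A minimal sketch of this partitioning step, assuming a simple fixed chunk size (the class name and the chunk size are illustrative, not from the paper):

```java
import java.util.*;

/** Splits an uploaded byte payload into fixed-size chunks before storage. */
public class CentralisedPartitioner {

    static List<byte[]> partition(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int start = 0; start < data.length; start += chunkSize) {
            int end = Math.min(start + chunkSize, data.length);
            chunks.add(Arrays.copyOfRange(data, start, end)); // last chunk may be shorter
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] upload = new byte[10];              // stands in for a large uploaded file
        List<byte[]> chunks = partition(upload, 4);
        System.out.println(chunks.size() + " chunks"); // 3 chunks: 4 + 4 + 2 bytes
    }
}
```

Each chunk then becomes a unit of distribution for the step described in Section 4.3.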
4.3 Data Distribution
The data chunks from the data repository are distributed to the VMs, which process the data. Distributing the data across the VMs optimally is an NP-hard problem. Our proposed system uses a peer-to-peer data distribution technique in which there are point-to-point connections between the VMs. As noted in Section 3, the service capacity of a P2P system evolves in two regimes: a transient phase, in which the system catches up with bursty demand and trace measurements exhibit exponential growth of service capacity, and a steady phase, in which the service capacity scales with and tracks the offered load and the rate at which peers exit the system.
5. IMPLEMENTATION DETAILS
We installed Ubuntu 14.04, Hadoop 2.7.1 and Eclipse. Additional components required by Hadoop, such as the Java JDK 1.7 and other necessary Linux packages, were installed using apt-get.
5.1 Cloud Analyst:
Cloud Analyst is a GUI-based tool developed on the CloudSim architecture. CloudSim is a toolkit for modeling, simulation and other experimentation; its main drawback is that every task has to be done programmatically. Cloud Analyst instead allows the user to repeat simulations with small parameter changes simply and rapidly. It allows setting the locations of the users generating application requests as well as the locations of the data centers, and it exposes various configuration parameters such as the number of users, number of VMs, number of processors, network bandwidth, amount of storage and others. Based on these parameters, the tool computes the simulation results and shows them in graphical form; the results include response time, processing time and cost [13].
Figure 2: Cloud Analyst Architecture [13]
5.2 Methodologies of Problem solving and Efficiency issues
The two main methodologies used to balance the load among the VMs are the Throttled and Round Robin VM load balancing policies.
a) Throttled: In the throttled algorithm, the client asks the load balancer to find a suitable VM to perform the required operation. The load balancer maintains a list of all VMs, with each row indexed individually to speed up the lookup process. If a machine matching the request's size and availability requirements is found, the load balancer accepts the client's request and allocates that VM to the client. If no VM matching the criteria is available, the load balancer returns -1 and the request is queued.
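A minimal sketch of this policy, assuming a simple boolean availability table (the class and method names are ours, not Cloud Analyst's):

```java
import java.util.*;

/** Sketch of the throttled policy: an indexed table of VM availability;
 *  a request gets the first available VM, or -1 so the caller can queue it. */
public class ThrottledBalancer {

    private final Map<Integer, Boolean> available = new LinkedHashMap<>();

    ThrottledBalancer(int vmCount) {
        for (int id = 0; id < vmCount; id++) available.put(id, true);
    }

    /** Returns the id of an allocated VM, or -1 if none is free. */
    int allocate() {
        for (Map.Entry<Integer, Boolean> entry : available.entrySet()) {
            if (entry.getValue()) {
                entry.setValue(false); // mark busy until the request completes
                return entry.getKey();
            }
        }
        return -1; // no match: the caller queues the request
    }

    /** Called when a VM finishes its work. */
    void release(int vmId) {
        available.put(vmId, true);
    }

    public static void main(String[] args) {
        ThrottledBalancer lb = new ThrottledBalancer(2);
        System.out.println(lb.allocate()); // 0
        System.out.println(lb.allocate()); // 1
        System.out.println(lb.allocate()); // -1: both busy, request queued
        lb.release(0);
        System.out.println(lb.allocate()); // 0 again
    }
}
```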
b) Round Robin: Round Robin is the simplest scheduling technique that utilizes the principle of time slices: time is divided into multiple slices and a particular time slice, called a quantum, is given to each node, during which the node performs its operations. On the basis of this time slice, resources are provided to the requesting client by the service provider. Round Robin is therefore very simple, but it places an additional load on the scheduler, which must decide the quantum size [8].
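The cyclic-allocation idea behind the Round Robin VM load balancer can be sketched as follows (the names are illustrative, and real schedulers also manage the quantum itself):

```java
/** Sketch of round-robin allocation: requests are handed to VMs in a
 *  fixed cyclic order, one turn each, regardless of their current load. */
public class RoundRobinBalancer {

    private final int vmCount;
    private int next = 0;

    RoundRobinBalancer(int vmCount) {
        this.vmCount = vmCount;
    }

    /** Each call allocates the next VM in the cycle. */
    int allocate() {
        int vm = next;
        next = (next + 1) % vmCount; // wrap around to the first VM
        return vm;
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(3);
        for (int request = 0; request < 5; request++) {
            System.out.print(lb.allocate() + " "); // 0 1 2 0 1
        }
    }
}
```

Unlike the throttled policy above, this allocator never checks availability, which is why heavily loaded VMs keep receiving requests and response times suffer.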
6. RESULT ANALYSIS
The Round Robin and Throttled load balancer algorithms were implemented for simulation; the VM load balancing algorithms were implemented in the Java language. In the Cloud Analyst tool, the configuration of the various components needs to be set to analyze the load balancing policies: the data center configuration, the user base configuration, and the application deployment configuration. We used two data centers, and the duration of the simulation was 60 hours.
Parameter                                  Value Used
VM Image Size                              10000
VM Memory                                  1024 MB
VM Bandwidth                               1000
Architecture (Data Center)                 X86
Operating System (Data Center)             Linux
VMM (Data Center)                          Xen
No. of Machines (Data Center)              50
No. of Processors per Machine (Data Center) 4
Processor Speed (Data Center)              100 MIPS
VM Policy (Data Center)                    Time Shared
User Grouping Factor                       1000
Request Grouping Factor                    100
Executable Instruction Length              250
Table 1: Parameter values
The following table shows the overall average response time of each VM load balancing algorithm.

Number of VMs    Round Robin (ms)    Throttled (ms)
50               220.90              150.93
100              226.18              151.12
200              228.81              152.85
Table 2: Comparison of average response time of VM load balancing algorithms
The results computed by Cloud Analyst after performing the simulation are shown in the following figures.
International Journal on Cloud Computing: Services and Architecture (IJCCSA) Vol. 6, No. 3, June 2016

Figure 3: Comparison of average response time of VM Load balancing Algorithm

Figure 3 above and Table 2 clearly indicate that the overall response time of Round Robin is much greater, whereas the overall response time is improved in the Throttled algorithm. Therefore, we can easily identify that, among all the algorithms, the Throttled algorithm is the best. In this algorithm, live migration of load is done in the virtual machine.

7. CONCLUSION

This paper presents the concept of cloud computing and focuses on the challenges of load balancing. It also focuses on the time required to process big data. We have also gone through a comparative study of big data, Hadoop and cloud computing, and have seen the connectivity of the cloud with Hadoop. A major amount of time was given to the study of different load balancing algorithms, followed by Hadoop's map-reduce for data partitioning. This paper aims to achieve a qualitative analysis of previous VM load balancing algorithms, which were then implemented in CloudSim and the Java language along with Hadoop. Load balancing on the cloud will improve the performance of cloud services substantially. It will prevent overloading of the server, which degrades performance, and the response time will also be improved. We have simulated two different scheduling algorithms for executing user requests in a cloud environment. Every algorithm is observed with respect to its scheduling criteria, such as average response time and data centre processing time. We efficiently used Hadoop to analyse the data, which is enhanced in the cloud.

8. REFERENCES

[1] Luis M. Vaquero, Antonio Celorio, Felix Cuadrado and Ruben Cuevas, (2015) "Deploying Large-Scale Datasets on-Demand in the Cloud: Treats and Tricks on Data Distribution", IEEE Trans., vol. 3, no. 2.
[2] S. Loughran, J. Alcaraz Calero and J. Guijarro, (2012) "Dynamic cloud deployment of a mapreduce architecture," IEEE Internet Comput., vol. 16, no. 6, pp. 40-50.
[3] Jeffrey Dean and Sanjay Ghemawat, (2008) "MapReduce: Simplified Data Processing on Large Clusters".
[4] L. Garcés-Erice, E. W. Biersack, P. A. Felber, K. W. Ross and G. Urvoy-Keller, (2003) "Hierarchical Peer-to-Peer Systems", Lecture Notes in Computer Science, vol. 2790, pp. 1230-1239.
[5] Xiangying Yang and Gustavo de Veciana, (2004) "Service Capacity of Peer to Peer Networks", IEEE.
[6] Gang Chen, Tianlei Hu, Dawei Jiang, Peng Lu, Kian-Lee Tan, Hoang Tam Vo and Sai Wu, (2014) "BestPeer++: A Peer-to-Peer Based Large-Scale Data Processing Platform", IEEE Trans., vol. 26, no.
[7] Mohammed Radi, (2014) "Efficient Service Broker Policy For Large-Scale Cloud Environments", IJCCSA, Vol. 2, No.
[8] Tejinder Sharma and Vijay Kumar Banga, (2013) "Efficient and Enhanced Algorithm in Cloud Computing", IJSCE, ISSN: 2231-2307, Volume-3, Issue-1.
[9] Tanveer Ahmed and Yogendra Singh, (2012) "Analytic Study Of Load Balancing Techniques Using Tool Cloud Analyst", IJERA, Vol. 2, Issue 2, pp. 1027-1030.
[10] Soumya Ray and Ajanta De Sarkar, (2012) "Execution Analysis of Load Balancing Algorithm in Cloud Computing Environment", IJCCSA, Vol. 2, No. 5.
[11] Pooja Samal and Pranati Mishra, (2013) "Analysis of variants in Round Robin Algorithms for load balancing in Cloud Computing", IJCSIT, Vol. 4 (3), pp. 416-419.
[12] Samip Raut, Kamlesh Jaiswal, Vaibhav Kale, Akshay Mote, Soudamini Pawar and Hema Kolla, (2016) "Survey on Data Distribution Handling Techniques on Cloud", IJRAET, Volume-4, Issue-7.
[13] Bhathiya Wickremasinghe, "CloudAnalyst: A CloudSim-based Tool for Modelling and Analysis of Large Scale Cloud Computing Environment".
AUTHORS
Samip Raut is pursuing the Bachelor’s degree in Computer Engineering from SPPU,
Pune. His area of interest includes Cloud Computing and Networking.
Kamlesh Jaiswal is pursuing the Bachelor’s degree in Computer Engineering from
SPPU, Pune. His area of interest includes Cloud Computing and Big Data.
Vaibhav Kale is pursuing the Bachelor’s degree in Computer Engineering from SPPU,
Pune. His area of interest includes Cloud Computing and Big Data.
Akshay Mote is pursuing the Bachelor’s degree in Computer Engineering from SPPU,
Pune. His area of interest includes Cloud Computing and Big Data.
Ms. Soudamini Pawar received her BE (CSE) from Gulbarga University, Gulbarga, and her ME from SPPU, Pune. She has about 12 years of teaching experience. She is currently working as an Assistant Professor at D. Y. Patil College of Engineering, Akurdi, Pune.
Mrs. Suvarna Kadam completed her PG in Computer Engineering from SPPU, Pune. She has 15 years of experience in computing in a variety of roles, including developer, entrepreneur and researcher. She is currently working as an Assistant Professor in the Department of Computer Engineering and enjoys guiding UG students in state-of-the-art areas of research, including machine learning and high-performance computing.