Cloud computing is an attractive technology in the field of computer science. Gartner's report predicts that the cloud will bring changes to the IT industry. The cloud is changing our lives by providing users with new types of services: users obtain services from the cloud without needing to attend to the underlying details. NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. More and more people are paying attention to cloud computing. Cloud computing is efficient and scalable, but maintaining stability while processing many jobs in a cloud environment is a very complex problem, and load balancing has received much attention from researchers. Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial for improving system performance and maintaining stability. Load balancing schemes can be either static or dynamic, depending on whether the system dynamics matter. Static schemes do not use system information and are less complex, while dynamic schemes impose additional costs on the system but can adapt as the system status changes. A dynamic scheme is used here for its flexibility. The model has a main controller and balancers to gather and analyze the information, so the dynamic control has little influence on the other working nodes. The system status then provides a basis for choosing the right load balancing strategy. The load balancing model presented in this research article targets the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations. The model therefore divides the public cloud into several cloud partitions; when the environment is very large and complex, these divisions simplify the load balancing. The cloud has a main controller that chooses the suitable partition for each arriving job, while the balancer for each cloud partition chooses the best load balancing strategy.
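As a minimal sketch of the partitioned model described above (partition names, node loads, and the status thresholds are illustrative assumptions, not values from the paper): the main controller routes each arriving job to a suitable partition based on its reported status, and each partition's balancer then picks a strategy matching that status.

```python
from dataclasses import dataclass, field
import random

IDLE, NORMAL, OVERLOADED = "idle", "normal", "overloaded"

@dataclass
class Partition:
    name: str
    loads: dict = field(default_factory=dict)  # node -> load in [0, 1]

    def status(self):
        avg = sum(self.loads.values()) / len(self.loads)
        if avg < 0.2:
            return IDLE
        if avg < 0.8:
            return NORMAL
        return OVERLOADED

    def assign(self, job):
        # Idle partition: a cheap random pick suffices (static strategy);
        # normal partition: pick the least-loaded node (dynamic strategy).
        if self.status() == IDLE:
            node = random.choice(list(self.loads))
        else:
            node = min(self.loads, key=self.loads.get)
        self.loads[node] += 0.1
        return node

def main_controller(partitions, job):
    # Prefer idle partitions, then normal ones; skip overloaded partitions.
    for wanted in (IDLE, NORMAL):
        for p in partitions:
            if p.status() == wanted:
                return p.name, p.assign(job)
    return None  # all partitions overloaded: the job must wait

parts = [Partition("us-east", {"n1": 0.9, "n2": 0.85}),
         Partition("eu-west", {"n1": 0.3, "n2": 0.5})]
print(main_controller(parts, "job-42"))  # → ('eu-west', 'n1')
```

The key property the abstract describes is visible here: the controller consults only per-partition status summaries, so its decision has little influence on the working nodes themselves.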
The embedded technology industry is experiencing two major trends. On the one hand, computation is moving away from traditional desktop and department-level computer centers. On the other hand, an increasing majority of clients consist of a growing variety of embedded devices, such as smart phones, tablet computers and television set-top boxes (STBs), whose capabilities continue to improve while also providing the data locality associated with data-intensive application processing.
A CLOUD BASED ARCHITECTURE FOR WORKING ON BIG DATA WITH WORKFLOW MANAGEMENT (IJwest)
Real environments produce collections of noisy and vague data, called Big Data. Middleware has been developed to work on such data and is now very widely used. The challenge of working on Big Data lies in its processing and management. An integrated management system is required to integrate data from multiple sensors and maximize target success, in situations where the system has constant time constraints for processing and real-time decision-making. A reliable data fusion model must meet this requirement and steadily let the user monitor the data stream. With the widespread use of workflow interfaces, this requirement can be addressed, but working with Big Data remains challenging. We provide a multi-agent cloud-based architecture to solve this problem from a higher vantage point. This architecture enables Big Data fusion using a workflow management interface. The proposed system is capable of self-repair in the presence of risks, and its own risk is low.
Complex Measurement Systems in Medicine: from Synchronized Monotask Measuring... (ITIIIndustries)
Design problems of flexible computer systems for physiological research are discussed. The widespread case of employing commercial medical devices as parts of the resulting computer system is analyzed. To overcome most of the arising difficulties, we propose using a universal synchronizing device and modular script-based software. The prospects of such computer systems are outlined as an evolution into cyber-physical systems with on-demand plugging in of required hardware modules.
A survey of various scheduling algorithms in cloud computing environments (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Privacy Preserving Public Auditing and Data Integrity for Secure Cloud Storag... (INFOGAIN PUBLICATION)
Using cloud services, anyone can remotely store their data and obtain on-demand, high-quality applications and services from a shared pool of computing resources, without the burden of local data storage and maintenance. The cloud is a common place for storing data as well as sharing it. However, preserving the privacy and maintaining the integrity of data during public auditing remains an open challenge. In this paper, we introduce a third-party auditor (TPA), which keeps track of all the files along with their integrity. The task of the TPA is to verify the data, so that the user can be worry-free. Verification is performed on the aggregate authenticators sent by the user and the Cloud Service Provider (CSP). To this end, we propose a secure cloud storage system that supports privacy-preserving public auditing and blockless data verification over the cloud.
Data Distribution Handling on Cloud for Deployment of Big Data (ijccsa)
Cloud computing is a new, emerging model in the field of computer science. For varying workloads, cloud computing presents a large-scale, on-demand infrastructure. The primary use of clouds in practice is to process massive amounts of data, and processing large datasets has become crucial in research and business environments. The big challenge associated with processing large datasets is the vast infrastructure required. Cloud computing provides vast infrastructure to store and process Big Data: VMs can be provisioned on demand and formed into clusters to process the data. The MapReduce paradigm can be used, wherein the mapper assigns parts of the task to particular VMs in the cluster and the reducer combines the individual outputs from each VM to produce the final result. We have proposed an algorithm to reduce the overall data distribution and processing time. We tested our solution in the CloudAnalyst simulation environment and found that the proposed algorithm significantly reduces the overall data processing time in the cloud.
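The MapReduce flow this abstract describes (a mapper distributes chunks of the dataset across VMs in a cluster, a reducer merges the partial outputs) can be sketched as follows; the VM cluster is simulated with a thread pool, and the word-count task and all names are illustrative, not from the paper:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter
from functools import reduce

def mapper(chunk):
    # Each simulated "VM" counts words in its assigned chunk of the dataset.
    return Counter(chunk.split())

def reducer(partials):
    # Combine the per-VM partial counts into the final result.
    return reduce(lambda a, b: a + b, partials, Counter())

dataset = ["big data on cloud", "cloud data processing", "big cloud"]
with ThreadPoolExecutor(max_workers=3) as vm_cluster:  # 3 simulated VMs
    partial_counts = list(vm_cluster.map(mapper, dataset))

result = reducer(partial_counts)
print(result["cloud"])  # → 3
```

In a real deployment each mapper call would run on a provisioned VM rather than a thread, but the map/shuffle/reduce structure is the same.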
A novel resource efficient dmms approach for network monitoring and controlli... (ijwmn)
In this paper, we propose a novel Distributed MANET Management System (DMMS) that uses cross-layer models to demonstrate a simplified way of efficiently managing the overall performance of individual network resources (nodes) and the network itself, which is critical not only for monitoring the traffic but also for dynamically controlling the end-to-end Quality of Service (QoS) for different applications. In the proposed DMMS architecture, each network resource maintains a set of Management Information Base (MIB) elements and stores resource activities in abstracted form as counters, timers, flags and threshold values. The abstract data is exchanged between management agents residing in different resources on a need-to-know basis, and each agent logically executes management functions locally to develop an understanding of the behavior of all network resources, ensuring that user protocols can function smoothly. In traditional network management systems, by contrast, statistical data such as resource usage and performance is collected by polling the resources; the amount of data exchanged with other resources through management protocols can be extremely high, and the bandwidth consumed by management overhead increases significantly. The data storage requirements in each network resource for management functions also grow and become inefficient, as they increase the power used for processing. Our proposed scheme targets these problems.
Cloud computing is an emerging technology. It processes huge amounts of data, so the scheduling mechanism plays a vital role in cloud computing. Our protocol is designed to minimize switching time, improve resource utilization, and improve server performance and throughput. The protocol is based on scheduling the jobs in the cloud so as to resolve the drawbacks of the existing protocols. We assign a priority to each job, which gives better performance, and we aim to minimize the waiting time and switching time. Every effort has been made to manage the scheduling of jobs so as to resolve the drawbacks of existing protocols and to improve the efficiency and throughput of the server.
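A priority-based job scheduler of the kind this abstract outlines can be sketched with a heap; the job names, priority values, and the fixed burst time are illustrative assumptions, since the abstract does not specify them:

```python
import heapq

def schedule(jobs):
    """Run jobs in priority order (lower number = higher priority).
    Ties are broken by arrival order so equal-priority jobs are not starved."""
    heap = [(prio, i, name) for i, (name, prio) in enumerate(jobs)]
    heapq.heapify(heap)
    order, clock, total_wait = [], 0, 0
    burst = 2  # assume each job needs 2 time units (illustrative)
    while heap:
        _, _, name = heapq.heappop(heap)
        total_wait += clock      # this job has waited `clock` units so far
        clock += burst
        order.append(name)
    return order, total_wait

jobs = [("backup", 3), ("user-query", 1), ("report", 2)]
order, wait = schedule(jobs)
print(order)  # → ['user-query', 'report', 'backup']
```

Running the highest-priority job first is what drives the waiting time down: here the total waiting time is 6 units, versus 6 as well for arrival order in this tiny example, but the interactive `user-query` job waits 0 instead of 2.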
A Prolific Scheme for Load Balancing Relying on Task Completion Time (IJECEIAES)
In networks with a lot of computation, load balancing gains increasing significance. The ultimate aim is to facilitate the sharing of services and resources on the network over the Internet, in order to offer various resources, services and applications. A key issue to be addressed in networks with a large amount of computation is load balancing. Load is the number of tasks 't' performed by a computation system, and it can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning load between the nodes should enhance resource utilization and minimize computation time. This can be accomplished by a uniform distribution of load to all the nodes. A load balancing method should guarantee that each node in the network performs an almost equal amount of work relative to its capacity and the availability of its resources. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient-Task Subtraction), which selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning load to the nodes in the network.
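The capacity-relative uniformity that the abstract calls for can be sketched as a capacity-weighted allocation. Note this is a generic heuristic offered only as an illustration, not the paper's actual E-TS algorithm, whose details the abstract does not give:

```python
def balance(tasks, capacities):
    # Give each node a share of tasks proportional to its capacity, so every
    # node carries roughly equal work relative to what it can handle.
    total = sum(capacities.values())
    shares = {n: round(tasks * c / total) for n, c in capacities.items()}
    # Fix any rounding drift by adjusting the highest-capacity node.
    drift = tasks - sum(shares.values())
    biggest = max(capacities, key=capacities.get)
    shares[biggest] += drift
    return shares

# Node "a" has twice the capacity of "b" and "c", so it gets half the tasks.
print(balance(100, {"a": 4, "b": 2, "c": 2}))  # → {'a': 50, 'b': 25, 'c': 25}
```

Every task is assigned exactly once (the drift correction guarantees the shares sum to the task count), which is the neutrality property the abstract mentions.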
A hybrid algorithm to reduce energy consumption management in cloud data centers (IJECEIAES)
There are several physical data centers in a cloud environment, each with hundreds or thousands of computers. Virtualization is the key technology that makes cloud computing feasible. It separates virtual machines in such a way that each of these so-called virtualized machines can be configured on a number of hosts according to the type of user application; it is also possible to dynamically alter the resources allocated to a virtual machine. Methods of energy saving in data centers can be divided into three general categories: 1) methods based on load balancing of resources; 2) using hardware facilities for scheduling; 3) considering the thermal characteristics of the environment. This paper focuses on load balancing methods, as they act dynamically because of their dependence on the current behavior of the system. By taking a detailed look at previous methods, we provide a hybrid method that saves energy by finding a suitable configuration for virtual machine placement and by considering the special features of virtual environments when scheduling and balancing dynamic loads via the live migration method.
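A minimal sketch of the placement idea behind such energy-saving methods: consolidate VMs onto as few hosts as they fit (best-fit decreasing bin packing), so the remaining hosts can be powered down. This is a standard heuristic shown to illustrate the category, not the paper's specific hybrid algorithm:

```python
def place_vms(vm_demands, host_capacity):
    """Best-fit decreasing bin packing: pack VMs onto the fewest hosts.
    Every host that ends up unused can be switched off to save energy."""
    hosts = []        # remaining capacity of each powered-on host
    placement = {}    # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda x: -x[1]):
        # Best fit: the host whose remaining capacity is tightest but sufficient.
        candidates = [i for i, free in enumerate(hosts) if free >= demand]
        if candidates:
            i = min(candidates, key=lambda i: hosts[i])
            hosts[i] -= demand
        else:
            hosts.append(host_capacity - demand)  # power on a new host
            i = len(hosts) - 1
        placement[vm] = i
    return placement, len(hosts)

vms = {"vm1": 0.5, "vm2": 0.4, "vm3": 0.3, "vm4": 0.6}
placement, hosts_used = place_vms(vms, host_capacity=1.0)
print(hosts_used)  # → 2
```

Live migration, which the abstract mentions, is what makes re-running such a placement on a running system possible: VMs can be moved to the chosen hosts without downtime.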
The increasing demands on the usage of data centers, especially in provisioning cloud applications (i.e., data-intensive applications), have drastically increased energy consumption, which is becoming a critical issue. Failing to handle this increase in energy consumption has a negative impact on the environment and also negatively affects cloud providers' profits due to rising costs. Various surveys have been carried out to address and classify energy-aware approaches and solutions. As this is an active research area with an increasing number of proposals, more surveys are needed to support researchers in the field. Thus, in this paper, we review the current state of existing related surveys, to serve as a guideline for researchers as well as potential reviewers embarking on new concerns and dimensions that complement the existing surveys. Our review highlights four main topics and concludes with some recommendations for future surveys.
Adaptive offloading in mobile cloud computing, via automatic partitioning of tasks, augments execution by migrating heavy computation from mobile devices to resourceful cloud servers and then receiving the results via wireless networks. Offloading is an effective way to overcome the resource and functionality constraints of mobile devices, since it releases them from intensive processing and increases the performance of mobile applications in terms of response time. Offloading brings many potential benefits, such as energy saving, performance improvement, reliability improvement, ease for software developers, and better exploitation of contextual information. Parameters concerning method transitions, response times, cost, and energy consumption are dynamically re-estimated at runtime during application execution.
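The runtime decision such offloading frameworks make can be sketched as a simple cost comparison; the parameter names and the numbers in the example are illustrative assumptions, not measurements from any real framework:

```python
def should_offload(local_ms, remote_ms, payload_kb, bandwidth_kbps, rtt_ms):
    # Offload only if remote execution plus data transfer beats local execution.
    transfer_ms = payload_kb * 8 / bandwidth_kbps * 1000 + rtt_ms
    return remote_ms + transfer_ms < local_ms

# A heavy method: 900 ms locally, 50 ms on a cloud server, with 20 KB of
# state to ship over a 5000 kbps link with a 60 ms round trip.
print(should_offload(900, 50, 20, 5000, 60))  # → True
```

Because bandwidth, RTT, and execution times vary, real frameworks re-estimate these inputs at runtime, exactly as the paragraph above describes, so the same method may be offloaded on Wi-Fi but run locally on a slow cellular link. An energy term could be added to the comparison in the same way.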
Achieving High Performance Distributed System: Using Grid, Cluster and Cloud ... (IJERA Editor)
To increase the efficiency of any task, we require a system that provides high performance along with flexibility and cost efficiency for the user. Distributed computing, as we are all aware, has become very popular over the past decade. It has three major types, namely cluster, grid and cloud, and in order to develop a high performance distributed system, we need to utilize all three. In this paper, we first introduce all three types of distributed computing. Subsequently, examining them, we explore trends in computing and in green, sustainable computing to enhance the performance of a distributed system. Finally, presenting the future scope, we conclude the paper by suggesting a path to achieve a green high performance distributed system using cluster, grid and cloud computing.
ADVANCES IN HIGHER EDUCATIONAL RESOURCE SHARING AND CLOUD SERVICES FOR KSA (IJCSES Journal)
Cloud represents an important change in the way information technology is used. The cloud makes it possible to access work anywhere, anytime, and to share it with anyone [1]. It is changing the way people communicate, work and learn [2]. In this changing environment, it is important to think about the opportunities and risks of using the cloud in the education field, and about the lessons we can learn from previous uses of this technology in education. In order to gain the benefits of the cloud for the educational system in KSA, this paper presents a comprehensive study of the scientific literature. It also presents significant information such as findings, case studies, related frameworks, and the supporting tools associated with the migration of organizational resources to the cloud.
Stream Processing Environmental Applications in Jordan Valley (CSCJournals)
Database system architectures have gone through innovative changes, especially the unification of algorithms and data via the integration of programming languages with the database system. Such innovative changes are needed in stream-based applications, since they have different requirements for the principles of a stream data processing system. For example, the monitoring component requires the query processing system to detect user-defined events in a timely manner, as in a real-time monitoring system. Furthermore, stream processing fits a large class of new applications for which conventional DBMSs fall short, since many stream-oriented systems are inherently geographically distributed, and the distribution offers scalable load management and higher availability. This paper presents statistical information about meteorological data, such as the weather, soil and evapotranspiration data collected by the weather stations distributed across different locations in the Jordan Valley. In addition, it shows the importance of stream processing in some real-life applications, and shows how database systems can help researchers build prototypes that can be implemented and used in a continuous monitoring system.
The advent of Big Data has seen the emergence of new processing and storage challenges, which are often solved by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect that some resources will fail during use. Load balancing and task scheduling are important steps in determining the performance of parallel applications, hence the need to design load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: intra-scheduler load balancing, in order to avoid the use of the large-scale communication network, and inter-scheduler load balancing, for load regulation of the whole system. The strategy improves the average response time of CLOAK-Reduce application tasks with minimal communication. We focus on three performance indicators, namely response time, process latency and running time of MapReduce tasks.
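The two-level strategy described above (balance within each scheduler first, and move load between schedulers only when one is saturated) can be sketched as follows; the scheduler names, load values, and the saturation threshold are illustrative assumptions:

```python
def intra_balance(nodes):
    # Level 1: even out load among a scheduler's own nodes (no WAN traffic).
    avg = sum(nodes.values()) / len(nodes)
    return {n: avg for n in nodes}

def inter_balance(schedulers, threshold=0.8):
    # Level 2: plan migrations between schedulers only when one remains
    # saturated after local balancing, keeping large-scale communication
    # over the network to a minimum.
    for name, nodes in schedulers.items():
        schedulers[name] = intra_balance(nodes)
    loads = {s: sum(n.values()) / len(n) for s, n in schedulers.items()}
    hot = [s for s, l in loads.items() if l > threshold]
    cold = [s for s, l in loads.items() if l <= threshold]
    return [(h, min(cold, key=loads.get)) for h in hot if cold]

cluster = {"sched-A": {"n1": 0.9, "n2": 0.95},
           "sched-B": {"n1": 0.2, "n2": 0.3}}
print(inter_balance(cluster))  # → [('sched-A', 'sched-B')]
```

Only the saturated scheduler triggers an inter-scheduler move, which reflects the design goal stated above: the expensive network-wide regulation runs rarely, while cheap intra-scheduler balancing runs constantly.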
Load balancing in public cloud by division of cloud based on the geographical... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The Method of Repeated Application of a Quadrature Formula of Trapezoids and ... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
CFD Analysis and Fabrication of Aluminium Cenosphere Composites (IJMER)
Metal matrix composites are engineered materials combining two or more dissimilar materials to obtain enhanced properties. Aluminium alloys reinforced with ceramic particles exhibit superior mechanical properties compared to unreinforced aluminium alloys and are hence candidates for engineering applications. In the present investigation, an aluminium alloy is used as the matrix and cenosphere as the reinforcing material. The hybrid metal matrix composite is produced using conventional foundry techniques by the casting route. The cenosphere is added at 2%, 4% and 6% by volume, also considering the influence of cenosphere particle size, to the molten metal together with magnesium, which is the main parameter governing the wettability of the cenosphere and the aluminium alloy. The hybrid composite is tested for hardness, density, mechanical properties and impact strength. The density decreases with increasing cenosphere content, while the impact strength increases. The resistance to dry wear and slurry erosive wear also increases with increasing cenosphere content, so this material can be used as a bearing material. Being less dense than aluminium, this composite can be used in place of conventional aluminium alloys in aircraft components.
Privacy Preserving Public Auditing and Data Integrity for Secure Cloud Storag...INFOGAIN PUBLICATION
Using cloud services, anyone can remotely store their data and can have the on-demand high quality applications and services from a shared pool of computing resources, without the burden of local data storage and maintenance. Cloud is a commonplace for storing data as well as sharing of that data. However, preserving the privacy and maintaining integrity of data during public auditing remains to be an open challenge. In this paper, we introducing a third party auditor (TPA), which will keep track of all the files along with their integrity. The task of TPA is to verify the data, so that the user will be worry-free. Verification of data is done on the aggregate authenticators sent by the user and Cloud Service Provider (CSP). For this, we propose a secure cloud storage system which supports privacy-preserving public auditing and blockless data verification over the cloud
Data Distribution Handling on Cloud for Deployment of Big Dataijccsa
Cloud computing is a new emerging model in the field of computer science. For varying workload Cloud computing presents a large scale on demand infrastructure. The primary usage of clouds in practice is to process massive amounts of data. Processing large datasets has become crucial in research and business environments. The big challenges associated with processing large datasets is the vast infrastructure required. Cloud computing provides vast infrastructure to store and process Big data. Vms can be provisioned on demand in cloud to process the data by forming cluster of Vms . Map Reduce paradigm can be used to process data wherein the mapper assign part of task to particular Vms in cluster and reducer combines individual output from each Vms to produce final result. we have proposed an algorithm to reduce the overall data distribution and processing time. We tested our solution in Cloud Analyst Simulation environment wherein, we found that our proposed algorithm significantly reduces the overall data processing time in cloud.
A novel resource efficient dmms approach for network monitoring and controlli...ijwmn
In this paper, we propose a novel Distributed MANET Management System (DMMS) approach to use cross layer models to demonstrate a simplified way of efficiently managing the overall performance of individual network resources (nodes) and the network itself which is critical for not only monitoring the traffic, but also dynamically controlling the end-to-end Quality of Service (QoS) for different applications. In the proposed DMMS architecture, each network resource maintains a set of Management Information Base (MIB) elements and stores resource activities in their abstraction in terms of counters, timer, flag and threshold values. The abstract data is exchanged between different management agents residing in different resources on a need-to-know basis and each agent logically executes management functions locally to develop understanding of the behavior of all network resources to ensure that user protocols can function smoothly. However, in traditional network management systems, they collect statistical data such as resource usage and performance by spoofing of resources. The amount of data that is exchanged with other resources through management protocols that can be extremely high and the bandwidth for overhead management functions increases significantly. Also, the data storage requirements in each network resource for management functions increases and become inefficient as it increases the power usage for processing. Our proposed scheme targets at solving the problems.
Cloud computing is an emerging technology. It process huge amount of data so scheduling mechanism
works as a vital role in the cloud computing. Thus my protocol is designed to minimize the switching time,
improve the resource utilization and also improve the server performance and throughput. This method or
protocol is based on scheduling the jobs in the cloud and to solve the drawbacks in the existing protocols.
Here we assign the priority to the job which gives better performance to the computer and try my best to
minimize the waiting time and switching time. Best effort has been made to manage the scheduling of jobs
for solving drawbacks of existing protocols and also improvise the efficiency and throughput of the server.
A Prolific Scheme for Load Balancing Relying on Task Completion Time IJECEIAES
In networks with lot of computation, load balancing gains increasing significance. To offer various resources, services and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be focused and addressed in networks with large amount of computation is load balancing. Load is the number of tasks„t‟ performed by a computation system. The load can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning the load between the nodes should enhance the resource utilization and minimize the computation time. This can be accomplished by a uniform distribution of load of to all the nodes. A Load balancing method should guarantee that, each node in a network performs almost equal amount of work pertinent to their capacity and availability of resources. Relying on task subtraction, this work has presented a pioneering algorithm termed as E-TS (Efficient-Task Subtraction). This algorithm has selected appropriate nodes for each task. The proposed algorithm has improved the utilization of computing resources and has preserved the neutrality in assigning the load to the nodes in the network.
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJERD, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, reserach and review articles, IJERD Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathemetics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer reviw journal, indexed journal, reserach and review articles, engineering journal, www.ijerd.com, research journals,
yahoo journals, bing journals, International Journal of Engineering Research and Development, google journals, hard copy of journal
A hybrid algorithm to reduce energy consumption management in cloud data centersIJECEIAES
There are several physical data centers in cloud environment with hundreds or thousands of computers. Virtualization is the key technology to make cloud computing feasible. It separates virtual machines in a way that each of these so-called virtualized machines can be configured on a number of hosts according to the type of user application. It is also possible to dynamically alter the allocated resources of a virtual machine. Different methods of energy saving in data centers can be divided into three general categories: 1) methods based on load balancing of resources; 2) using hardware facilities for scheduling; 3) considering thermal characteristics of the environment. This paper focuses on load balancing methods as they act dynamically because of their dependence on the current behavior of system. By taking a detailed look on previous methods, we provide a hybrid method which enables us to save energy through finding a suitable configuration for virtual machines placement and considering special features of virtual environments for scheduling and balancing dynamic loads by live migration method.
The increasing demands on data centers, especially for provisioning cloud applications (i.e., data-intensive applications), have drastically increased energy consumption, which is becoming a critical issue. Failing to handle this increase in energy consumption has a negative impact on the environment and also hurts cloud providers' profits through rising costs. Various surveys have been carried out to address and classify energy-aware approaches and solutions. As this is an active research area with an increasing number of proposals, more surveys are needed to support researchers in the field. Thus, in this paper, we review the current state of existing related surveys to serve as a guideline for researchers, as well as for potential reviewers, to embark on new concerns and dimensions that complement existing related surveys. Our review highlights four main topics and concludes with some recommendations for future surveys.
Adaptive offloading in mobile cloud computing, based on automatic partitioning of tasks, augments execution by migrating heavy computation from mobile devices to resourceful cloud servers and receiving the results back via wireless networks. Offloading is an effective way to overcome the resource and functionality constraints of mobile devices, since it releases them from intensive processing and increases the performance of mobile applications in terms of response time. Offloading brings many potential benefits, such as energy saving, performance improvement, reliability improvement, ease for software developers and better exploitation of contextual information. Parameters describing method transitions, response times, cost and energy consumption are dynamically re-estimated at runtime during application execution.
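The runtime decision such systems re-estimate can be sketched as a simple cost comparison: offload a task when estimated remote time (cloud execution plus wireless transfer) beats local execution. The parameter names and the linear cost model below are assumptions for illustration, not the paper's estimator.

```python
def should_offload(work_mi, local_mips, cloud_mips, data_mb, bandwidth_mbps):
    """Return True when offloading is predicted to be faster.

    work_mi        -- task size in million instructions (hypothetical unit)
    local_mips     -- device speed; cloud_mips -- server speed
    data_mb        -- state to ship over the wireless link
    bandwidth_mbps -- current link bandwidth
    """
    local_time = work_mi / local_mips
    remote_time = work_mi / cloud_mips + data_mb * 8 / bandwidth_mbps
    return remote_time < local_time
```

A real framework would re-estimate these inputs continuously, so the same method may be offloaded on Wi-Fi but run locally on a slow cellular link.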
Achieving High Performance Distributed System: Using Grid, Cluster and Cloud ... (IJERA Editor)
To increase the efficiency of any task, we require a system that provides high performance along with flexibility and cost efficiency for the user. Distributed computing, as we are all aware, has become very popular over the past decade. Distributed computing has three major variants, namely cluster, grid and cloud computing. In order to develop a high-performance distributed system, we need to utilize all three. In this paper, we first introduce the three types of distributed computing. We then examine them and explore trends in computing and in green, sustainable computing that can enhance the performance of a distributed system. Finally, presenting the future scope, we conclude the paper by suggesting a path toward a green, high-performance distributed system that uses cluster, grid and cloud computing.
ADVANCES IN HIGHER EDUCATIONAL RESOURCE SHARING AND CLOUD SERVICES FOR KSA (IJCSES Journal)
Cloud represents an important change in the way information technology is used. Cloud makes it possible to access work anywhere, anytime, and to share it with anyone [1]. It is changing the way people communicate, work and learn [2]. In this changing environment, it is important to think about the opportunities and risks of using the cloud in the education field, and the lessons we can learn from previous uses of this technology in education. In order to gain the benefits of the cloud for the educational system in KSA, a comprehensive study of the scientific literature is presented in this paper. The paper also presents significant information such as findings, case studies and related frameworks, as well as the tools associated with the migration of organizational resources to the cloud.
Stream Processing Environmental Applications in Jordan Valley (CSCJournals)
Database system architectures have gone through innovative changes, especially the unification of algorithms and data via the integration of programming languages with the database system. Such innovative changes are needed in stream-based applications, since they place different requirements on the principles of a stream data processing system. For example, the monitoring component requires the query processing system to detect user-defined events in a timely manner, as in a real-time monitoring system. Furthermore, stream processing fits a large class of new applications for which conventional DBMSs fall short, since many stream-oriented systems are inherently geographically distributed, and this distribution offers scalable load management and higher availability. This paper presents statistical information about meteorological data, such as weather, soil and evapotranspiration measurements collected by the weather stations distributed across different locations in the Jordan Valley. In addition, it shows the importance of stream processing in some real-life applications, and shows how database systems can help researchers build prototypes that can be implemented and used in a continuous monitoring system.
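The "detect user-defined events in a timely manner" idea can be sketched as a continuous query over a sensor stream: fire an alert whenever the mean of the last few readings crosses a threshold. The window size, threshold and readings below are made-up illustration values, not data from the Jordan Valley stations.

```python
from collections import deque

def monitor(stream, window=3, threshold=30.0):
    """Continuous-query sketch: emit the tick index of every reading at
    which the sliding-window mean exceeds the threshold."""
    buf, alerts = deque(maxlen=window), []
    for t, value in enumerate(stream):
        buf.append(value)                       # oldest reading drops out
        if len(buf) == window and sum(buf) / window > threshold:
            alerts.append(t)                    # user-defined event fires
    return alerts
```

Unlike a conventional DBMS query, this runs incrementally as each tuple arrives, which is the defining requirement of stream processing systems.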
The advent of Big Data has seen the emergence of new processing and storage challenges, which are often solved by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect that some resources will fail during use. Load balancing and task scheduling are important steps in determining the performance of parallel applications; hence the need to design load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: intra-scheduler load balancing, in order to avoid use of the large-scale communication network, and inter-scheduler load balancing, for load regulation of the whole system. The strategy improves the average response time of CLOAK-Reduce application tasks with minimal communication. We first focus on three performance indicators, namely response time, process latency and running time of MapReduce tasks.
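A minimal sketch of the two-level idea, under assumed names and thresholds (not the CLOAK-Reduce algorithm itself): loads are first evened out inside each scheduler group using only local moves, and a task crosses the expensive wide-area link only when a group's average still exceeds the global average by a margin.

```python
def balance(groups, threshold=2):
    """Two-level hierarchical balancing sketch.

    groups -- list of node-load lists, one inner list per scheduler group.
    Intra-scheduler step: local moves until each group is nearly even.
    Inter-scheduler step: one costly cross-group move per overloaded group.
    """
    # intra-scheduler: shift tasks from the most to the least loaded node
    for nodes in groups:
        while max(nodes) - min(nodes) > 1:
            nodes[nodes.index(max(nodes))] -= 1
            nodes[nodes.index(min(nodes))] += 1
    # inter-scheduler: regulate across groups only on large imbalance
    avgs = [sum(n) / len(n) for n in groups]
    global_avg = sum(avgs) / len(avgs)
    for i, avg in enumerate(avgs):
        if avg - global_avg > threshold:
            j = avgs.index(min(avgs))
            groups[i][groups[i].index(max(groups[i]))] -= 1
            groups[j][groups[j].index(min(groups[j]))] += 1
    return groups
```

The design point the paragraph makes is visible here: most moves stay inside a group, so the large-scale communication network is touched rarely.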
Load balancing in public cloud by division of cloud based on the geographical... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of the related fields of engineering and technology.
The Method of Repeated Application of a Quadrature Formula of Trapezoids and ... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education. IJMER covers all fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment, among many more.
CFD Analysis and Fabrication of Aluminium Cenosphere Composites (IJMER)
Metal matrix composites are engineered materials that combine two or more dissimilar materials to obtain enhanced properties. Aluminium alloys reinforced with ceramic particles exhibit superior mechanical properties compared to unreinforced aluminium alloys and hence are candidates for engineering applications. In the present investigation, an aluminium alloy is used as the matrix and cenosphere as the reinforcing material. The hybrid metal matrix composite is produced by the casting route using conventional foundry techniques. The cenosphere is added at 2%, 4% and 6% by volume, and the influence of cenosphere particle size is also considered; magnesium is added to the molten metal as the main agent for the wettability of cenosphere and aluminium alloy. The hybrid composite is tested for hardness, density, mechanical properties and impact strength. The density decreases with increasing cenosphere content, while the impact strength increases. The resistance to dry wear and slurry erosive wear also increases with cenosphere content, so this material can be used as a bearing material. Being less dense than aluminium, the composite can be used in place of conventional aluminium alloys in aircraft components.
Abstract: In this paper, we define and study a new type of generalized closed set called the g∗s-closed set. Its relationship with already-defined generalized closed sets is also studied.
A Novel Acknowledgement based Intrusion Detection System for MANETs (IJMER)
In Mobile Ad Hoc Networks (MANETs), a set of interacting nodes should cooperatively implement the routing functions to enable end-to-end communication along dynamic paths composed of multi-hop wireless links. Several multi-hop routing protocols have been proposed for ad hoc networks; the most popular ones include Dynamic Source Routing (DSR), Optimized Link-State Routing (OLSR), Ad Hoc On-Demand Distance Vector (AODV) and Destination-Sequenced Distance-Vector (DSDV). Most of these protocols rely on the assumption of trustworthy cooperation among all participating nodes; unfortunately, this may not be realistic for real hosts. Malicious hosts can exploit the weaknesses of MANETs to launch various kinds of attacks, and node mobility in an ad hoc network cannot be restricted. As a result, the many Intrusion Detection System (IDS) solutions proposed for wired networks, which are deployed at strategic points such as switches, gateways and routers, cannot be implemented directly in a MANET. Thus, wired-network IDS characteristics must be modified before being applied to ad hoc networks, and an IDS should be added to enhance the security level of MANETs. If a MANET can detect attackers as soon as they enter the network, the potential vulnerabilities caused by compromised nodes can be eliminated from the outset. IDSs usually act as a second layer of defense in MANETs. This paper presents a novel IDS for MANETs based on acknowledgements.
A helicopter is an aircraft that is lifted and propelled by one or more horizontal rotors, each consisting of two or more rotor blades. The main objective of this seminar topic is to study the basic concepts of helicopter aerodynamics. The forces acting on a helicopter, i.e., lift, drag, thrust and weight, are considered in developing analytic equations. The main topics discussed include blade motions such as blade flapping, feathering and lead-lag. The effect of stall on helicopter blade flapping is studied, and it was noticed that there is a sudden drop in lift at the stall condition. It was also found that dynamic stall occurs due to a rapidly changing angle of attack, which in turn affects the air flow over the airfoil. Blade flapping angle and induced angle of attack are the main parameters concerned with stall. The theory behind blade element analysis is covered in detail, and the importance of all these topics in the present scenario is also taken into consideration.
Experimental Investigation and Parametric Studies of Surface Roughness Analy... (IJMER)
Modern machining industries are focused on achieving high quality in terms of part/component accuracy, surface finish and high production rate, together with increased product life. The surface roughness of machined components has received serious attention from researchers for many years; it has been an important design feature and quality measure in machining processes. A large number of parameters affect surface roughness. The typical controllable parameters for CNC machines include cutting-tool variables, workpiece material variables, cutting conditions, etc., while the desired outputs are surface roughness, material removal rate, tool wear, etc. Optimizing machining parameters requires determining the most significant parameter for the required output, and many techniques are used for this, including the Taguchi, RSM and ANOVA approaches. The present work therefore aims to integrate the effects of the various parameters that affect surface roughness. This paper reviews the parameters affecting surface roughness and/or material removal rate in CNC turning as studied by researchers, and also discusses further parameters such as cutting force and power consumption under different conditions.
Visual Quality for both Images and Display of Systems by Visual Enhancement u... (IJMER)
An Efficient PDP Scheme for Distributed Cloud Storage (IJMER)
Analysis and Improved Operation of PEBB Based 5-Level Voltage Source Convert... (IJMER)
Power-electronic devices are appearing in an increasing number of applications, and power-electronic building blocks (PEBBs) are a strategic concept for increasing the reliability of power-electronic converters and minimizing their cost. Magnetic elements, such as zigzag transformers, phase-shifted transformers (PST) or zero-sequence blocking transformers (ZSBT), are used to interconnect the PEBBs. In this paper, the operation of multi-pulse converters is analyzed using a 5-level voltage source converter, describing the harmonic cancellation and minimization techniques that can be used in such converters and focusing on the power-electronics flexible AC transmission systems devices installed at the NYPA Marcy substation. To improve the dynamic response of this system, the use of selective harmonic elimination modulation is proposed and implemented.
Complex test pattern generation for high speed fault diagnosis in Embedded SRAM (IJMER)
Influence choice of the injection nodes of energy source on on-line losses of... (IJMER)
Design of Neural Network Controller for Active Vibration control of Cantileve... (IJMER)
A Novel Method for Blocking Misbehaving Users over Anonymizing Networks (IJMER)
Nonlinear Transformation Based Detection And Directional Mean Filter to Remo... (IJMER)
In this paper, a novel two-stage algorithm for the removal of random-valued impulse noise from images is presented. In the first stage, noisy pixels are detected using an exponential nonlinear function; the transformation of the pixels widens the gap between noisy and noise-free candidates, which leads to efficient detection. In the second stage, the directional differences between the pixels along the four main directions are calculated, and each noisy pixel value is replaced with the mean of the pixels lying in the direction of minimum difference. Experimental results show that the proposed method is superior to conventional methods in peak signal-to-noise ratio.
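The restoration stage described above can be sketched directly: for a pixel already flagged as noisy, compare the absolute differences between its neighbour pairs along the four main directions and replace it with the mean of the pair in the direction of minimum difference. This is a minimal sketch that assumes detection has already happened and ignores image borders.

```python
# Neighbour offset pairs for the four main directions.
DIRECTIONS = [((-1, 0), (1, 0)),    # vertical
              ((0, -1), (0, 1)),    # horizontal
              ((-1, -1), (1, 1)),   # main diagonal
              ((-1, 1), (1, -1))]   # anti-diagonal

def restore_pixel(img, y, x):
    """Replace a detected noisy pixel at (y, x) with the mean of the
    neighbour pair lying in the direction of minimum difference."""
    best_pair, best_diff = None, float("inf")
    for (dy1, dx1), (dy2, dx2) in DIRECTIONS:
        a = img[y + dy1][x + dx1]
        b = img[y + dy2][x + dx2]
        if abs(a - b) < best_diff:              # smoothest direction so far
            best_diff, best_pair = abs(a - b), (a, b)
    return (best_pair[0] + best_pair[1]) / 2
```

Picking the smoothest direction is what preserves edges: a noisy pixel sitting on a vertical edge is repaired from its vertical neighbours rather than averaged across the edge.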
Fatigue Performance in Grinding and Turning: An Overview (IJMER)
This paper analyses the influence of Abrasive Flow Machining (AFM), turning and grinding on the fatigue performance of Fe250. Surface condition has a strong effect on fatigue life, and most surfaces produced by conventional manufacturing operations such as machining and forging show poorer fatigue behavior than the polished surfaces commonly used for laboratory specimens. It is found that surfaces produced by different machining processes but with the same surface roughness have different fatigue performances. High-cycle fatigue data were obtained for Fe250 using three types of machining process, viz. AFM, turning and grinding, and S-N curves were plotted for samples from all three processes. It was found that the samples produced with AFM have the highest fatigue life.
Partitioning based Approach for Load Balancing Public Cloud (IJERA Editor)
A load balancing model based on cloud partitioning has an important impact on the performance of the public cloud environment. A cloud computing system that does not use load balancing has numerous drawbacks. Nowadays the usage of the internet and related resources has increased widely, leading to a tremendous increase in workload. Uneven distribution of this workload results in server overloading, and servers may crash; in such systems the resources are not used optimally, performance degrades and efficiency is reduced. Load balancing makes cloud computing more efficient and improves user satisfaction. This project presents a better load balancing model for the public cloud based on the cloud partitioning concept, with a switch mechanism to choose different strategies for different situations. The algorithm applies game theory to the load balancing strategy to improve efficiency in the public cloud environment.
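The switch mechanism can be sketched as follows: the balancer inspects a partition's load status and picks a cheap strategy when the partition is idle and a smarter one when it is busy. The status threshold and the two stand-in strategies (round robin, least-loaded) are assumptions for illustration; the paper's normal-load strategy is game-theoretic.

```python
import itertools

def make_balancer(nodes):
    """Return an assign(job) function over a list of node loads that
    switches strategy based on the partition's load status."""
    rr = itertools.cycle(range(len(nodes)))
    def assign(job):
        if max(nodes) < 3:          # "idle" status: blind round robin
            i = next(rr)            #   is cheap and good enough
        else:                       # "normal" status: pick least loaded
            i = min(range(len(nodes)), key=nodes.__getitem__)
        nodes[i] += job
        return i
    return assign
```

Because the status check runs per job, the same balancer drifts between strategies as load rises and falls, which is the point of the switch mechanism.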
Public Cloud Partition Using Load Status Evaluation and Cloud Division Rules (IJSRD)
With the growth of cloud computing, load balancing has an important impact on performance, and cloud computing efficiency depends on a good load balancer. In many situations, cloud partitioning is carried out by the load balancer, and different situations need different strategies for public cloud partitioning. In this paper we work on partitioning the public cloud using two mechanisms: load status evaluation and cloud division rules. Load status is evaluated by measuring the number of cloudlets arriving at a data center, and the cloud division rules are based on the geographic location the cloudlets come from. On the basis of geographic location we partition the public cloud and improve the performance of load balancing in cloud computing. We implement the proposed system with the help of the CloudSim 3.0 simulator.
Dynamic Cloud Partitioning and Load Balancing in Cloud (Shyam Hajare)
Cloud computing is an emerging and transformational paradigm in the field of information technology. It focuses on providing various services on demand, of which resource allocation and secure data storage are two examples. Storing huge amounts of data and accessing data from such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing that considers both static and dynamic parameters can improve the performance of the cloud service provider and increase user satisfaction. Implementing the model provides a dynamic way of selecting resources, depending on the state of the cloud environment at the time cloud provisions are accessed, based on cloud partitioning. The model can provide an effective load balancing algorithm for the cloud environment, better refresh-time methods and better load status evaluation methods.
IMPACT OF RESOURCE MANAGEMENT AND SCALABILITY ON PERFORMANCE OF CLOUD APPLICA... (IJCSEA Journal)
Cloud computing enables service providers to rent out their computing capabilities for deploying applications depending on user requirements. Cloud applications have diverse composition, configuration and deployment requirements, and quantifying their performance in cloud computing environments is a challenging task. In this paper, we try to identify the various parameters associated with the performance of cloud applications and analyse the impact of resource management and scalability among them.
Data Distribution Handling on Cloud for Deployment of Big Data (neirew J)
Cloud computing is a new, emerging model in the field of computer science. For varying workloads, cloud computing presents a large-scale, on-demand infrastructure. The primary practical usage of clouds is to process massive amounts of data; processing large datasets has become crucial in research and business environments, and the big challenge associated with it is the vast infrastructure required. Cloud computing provides vast infrastructure to store and process Big Data. VMs can be provisioned on demand in the cloud to process the data by forming a cluster of VMs. The MapReduce paradigm can be used to process the data, wherein the mapper assigns part of the task to particular VMs in the cluster and the reducer combines the individual outputs from each VM to produce the final result. We have proposed an algorithm to reduce the overall data distribution and processing time, and tested our solution in the Cloud Analyst simulation environment, where we found that it significantly reduces the overall data processing time in the cloud.
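The mapper/reducer flow described above can be sketched in a few lines: each data chunk plays the role of one VM's share of the work, and the reducer merges the per-VM partial results into the final answer. The word-count task and chunk contents are illustrative assumptions.

```python
from collections import Counter

def map_phase(chunks):
    """One mapper per 'VM': each counts words in its own chunk only."""
    return [Counter(chunk.split()) for chunk in chunks]

def reduce_phase(partials):
    """Reducer: merge the per-VM partial counts into the final result."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

# Two chunks stand in for data distributed to two VMs in the cluster.
chunks = ["cloud load cloud", "load balance"]
result = reduce_phase(map_phase(chunks))
```

In a real deployment the list comprehension in `map_phase` would run in parallel across VMs, so distribution and transfer time, not the counting itself, dominate the cost the proposed algorithm targets.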
Cloud computing is the next generation of computation; in all probability, people will have everything they need on the cloud. Cloud computing provides resources to clients on demand, and these resources may be software or hardware. Cloud computing architectures are distributed and parallel and serve the needs of multiple clients in different situations. This distributed design deploys resources distributively to deliver services efficiently to users in different geographical regions. Clients in a distributed environment generate requests randomly at any processor, and the major drawback of this randomness relates to task assignment: unequal task assignment creates imbalance, i.e., some processors are overloaded while many others are underloaded. The objective of load balancing is to transfer load from overloaded to underloaded processes transparently. Load balancing is one of the central issues in cloud computing: to achieve high performance, minimum response time and a high resource utilization ratio, tasks must be transferred between nodes in the cloud network, and a load balancing technique is used to distribute tasks from overloaded nodes to underloaded or idle nodes. In the following sections we discuss cloud computing, load balancing techniques and the proposed work of our load balancing system. The proposed load balancing algorithm is simulated with the Cloud Analyst toolkit. Performance is analyzed in terms of overall response time, data transfer, average data center servicing time and total cost of usage. Results are compared with three existing load balancing algorithms, namely Round Robin, Equally Spread Current Execution Load, and Throttled.
Results from the case studies performed show more data transfer with minimum response time.
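For orientation, two of the baseline policies the comparison uses can be sketched as follows: Round Robin cycles through VMs blindly, while Throttled only hands a request to a VM whose current load is under a cap, otherwise the request waits. The cap value and function names are assumptions, not Cloud Analyst's actual implementation.

```python
import itertools

def make_round_robin(n):
    """Round Robin sketch: ignore load, just cycle through the n VMs."""
    it = itertools.cycle(range(n))
    return lambda: next(it)

def throttled(loads, cap=2):
    """Throttled sketch: return the first VM whose load is under the cap,
    or None, meaning the request is queued until a VM frees up."""
    for i, load in enumerate(loads):
        if load < cap:
            return i
    return None
```

The contrast is the point of the study: Round Robin can keep stacking work on an already busy VM, whereas Throttled bounds per-VM load at the cost of occasionally queuing requests.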
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, covering new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected for publication through double peer review to ensure originality, relevance and readability. The articles published in the journal can be accessed online.
Virtual Machine Migration and Allocation in Cloud Computing: A Reviewijtsrd
Cloud computing is an emerging computing technology that maintains computational resources on large data centers and accessed through internet, rather than on local computers. VM migration provides the capability to balance the load, system maintenance, etc. Virtualization technology gives power to cloud computing. The virtual machine migration techniques can be divided into two categories that is pre copy and post copy approach. The process to move running applications or VMs from one physical machine to another is known as VM migration. In migration process the processor state, storage, memory and network connection are moved from one host to another.. Two important performance metrics are downtime and total migration time that the users care about most, because these metrics deals with service degradation and the time during which the service is unavailable. This paper focus on the analysis of live VM migration Techniques in cloud computing. Khushbu Singh Chandel | Dr. Avinash Sharma "Virtual Machine Migration and Allocation in Cloud Computing: A Review" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1 , December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29556.pdfPaper URL: https://www.ijtsrd.com/computer-science/computer-network/29556/virtual-machine-migration-and-allocation-in-cloud-computing-a-review/khushbu-singh-chandel
Development of a Suitable Load Balancing Strategy In Case Of a Cloud Computing Architecture
International OPEN ACCESS Journal of Modern Engineering Research (IJMER)
ISSN: 2249–6645 | www.ijmer.com | Vol. 4 | Iss. 5 | May 2014
Dayananda RB¹, Prof. Dr. G. Manoj Someswar²
¹ Associate Professor, Department of CSE, RRIT, Bangalore – 90, Karnataka, India
² Professor & HOD, Department of CSE & IT, Nawab Shah Alam Khan College of Engineering & Technology, Affiliated to JNTU, Hyderabad, Malakpet, Hyderabad – 500024, India
I. Introduction
The mysticism of cloud computing has worn off, leaving those required to implement cloud computing
directives with that valley-of-despair feeling. When the hype is skimmed from cloud computing, whether
private, public, or hybrid, what is left is a large, virtualized data center with IT control ranging from
limited to non-existent. In private cloud deployments, IT maintains a modicum of control, but as with all
architectural choices, that control is limited by the systems that comprise the cloud.
In a public cloud, not one stitch of the cloud infrastructure is within the bounds of organizational control.
Hybrid implementations, of course, suffer both of these limitations in different ways. But what cloud
computing represents, the ability to shift loads rapidly across the Internet, is something that large
multi-national and even large intra-national organizations mastered long before the term "cloud" came
along. While pundits like to refer to cloud computing as revolutionary, from the technologist's
perspective it is purely evolutionary. Cloud resources and cloud balancing extend the benefits of global
application delivery to the smallest of organizations. In its most basic form, cloud balancing gives an
organization the ability to distribute application requests across any number of application deployments
located in its own data centers and with cloud-computing providers.
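The request distribution described above can be sketched in a few lines. This is a minimal illustration, not the strategy proposed in this paper: the deployment names and the single scalar "load" metric are hypothetical, and a production cloud balancer would also weigh latency, cost, and health checks.

```python
# Hypothetical deployments: each entry is an application instance in a
# data center or public-cloud region, with a reported load in [0, 1].
deployments = {
    "dc-east": {"load": 0.72},
    "dc-west": {"load": 0.35},
    "cloud-eu": {"load": 0.10},
}

def pick_deployment(deployments):
    """Route to the least-loaded deployment (simplest dynamic policy)."""
    return min(deployments, key=lambda name: deployments[name]["load"])

def route(request_id, deployments):
    """Choose a target for one request and update its load (toy model)."""
    target = pick_deployment(deployments)
    deployments[target]["load"] += 0.01  # the request adds a little load
    return target

print(route(1, deployments))  # prints cloud-eu, the least-loaded target
```

Because the policy re-reads the load figures on every request, it is dynamic in the sense used later in this paper: it adapts as the system status changes, at the cost of collecting that status.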
Abstract: Cloud computing is an attractive technology in the field of computer science.
Gartner's report predicts that the cloud will bring changes to the IT industry. The cloud is changing
our lives by providing users with new types of services; users get service from a cloud without paying
attention to the details. NIST defines cloud computing as a model for enabling
ubiquitous, convenient, on-demand network access to a shared pool of configurable computing
resources (e.g., networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider interaction. More
and more people are paying attention to cloud computing. Cloud computing is efficient and scalable,
but maintaining the stability of processing so many jobs in the cloud computing environment is a very
complex problem, and load balancing has therefore received much attention from researchers. Since the
job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload
control is crucial for load balancing if system performance is to be improved and stability
maintained. Depending on whether they use run-time system information, load balancing schemes can be
either static or dynamic. Static schemes do not use system information and are less complex,
while dynamic schemes bring additional costs for the system but can adapt as the system
status changes. A dynamic scheme is used here for its flexibility. The model has a main controller
and balancers to gather and analyze the information; thus, the dynamic control has little influence
on the other working nodes. The system status then provides a basis for choosing the right load
balancing strategy. The load balancing model given in this research article is aimed at the public
cloud, which has numerous nodes with distributed computing resources in many different
geographic locations. This model therefore divides the public cloud into several cloud partitions;
when the environment is very large and complex, these divisions simplify the load balancing. The cloud
has a main controller that chooses the suitable partitions for arriving jobs while the balancer for
each cloud partition chooses the best load balancing strategy.
Keywords: Load Balancing Strategy, User Interface Design, Requirements Traceability Matrix,
Application Execution, Database Staging, Effective Network Performance
Cloud balancing takes a broader view of application delivery and applies specified thresholds and
service level agreements (SLAs) to every request. The use of cloud balancing can result in the majority of users
being served by application deployments in the cloud providers’ environments, even though the local
application deployment or internal, private cloud might have more than enough capacity to serve that user. A
variant of cloud balancing called cloud bursting, which sends excess traffic to cloud implementations, is also
being implemented across the globe today. Cloud bursting delivers the benefits of cloud providers when usage is
high, without the expense when organizational data centers—including internal cloud deployments—can handle
the workload. In one vision of the future, the shifting of load is automated to enable organizations to configure
clouds and cloud balancing and then turn their attention to other issues, trusting that the infrastructure will
perform as designed.
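As an illustrative sketch of the idea (not from the article; the threshold, capacity figure, and endpoint names are assumptions), cloud bursting reduces to a simple routing decision: serve requests locally while utilization stays under an SLA-derived threshold, and spill excess traffic to a cloud provider.

```python
# Sketch of cloud bursting: route to the local data center until a
# utilization threshold is reached, then "burst" to a cloud provider.
# The threshold and capacity below are hypothetical, not from the article.

LOCAL_CAPACITY = 100      # concurrent requests the local deployment can serve
BURST_THRESHOLD = 0.8     # burst once local utilization exceeds 80%

def route_request(active_local_requests: int) -> str:
    """Return the deployment that should serve the next request."""
    utilization = active_local_requests / LOCAL_CAPACITY
    if utilization < BURST_THRESHOLD:
        return "local-datacenter"
    return "cloud-provider"   # excess traffic is sent to the cloud
```

With these assumed numbers, a request arriving when 50 local requests are active stays local, while one arriving at 90 active requests is sent to the cloud provider.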
II. Existing System
Cloud computing is efficient and scalable, but maintaining the stability of processing so many jobs in the cloud computing environment is a very complex problem, and load balancing has received much attention from researchers. Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial for improving system performance and maintaining stability. Depending on whether the system dynamics are taken into account, load balancing schemes can be either static or dynamic. Static schemes do not use system information and are less complex, while dynamic schemes bring additional costs but can adapt as the system status changes. A dynamic scheme is used here for its flexibility.
Disadvantages of Existing System
Depending on whether the system dynamics are taken into account, load balancing schemes can be either static or dynamic. Static schemes do not use the system information; while less complex, they cannot adapt as the system status changes.
III. Proposed System
Depending on whether the system dynamics are taken into account, load balancing schemes can be either static or dynamic. Static schemes do not use system information and are less complex, while dynamic schemes bring additional costs but can adapt as the system status changes. A dynamic scheme is used here for its flexibility. The model has a main controller and balancers to gather and analyze the information; thus, the dynamic control has little influence on the other working nodes. The system status then provides a basis for choosing the right load balancing strategy.
The load balancing model given in this research article is aimed at the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations. This model therefore divides the public cloud into several cloud partitions. When the environment is very large and complex, these divisions simplify the load balancing. The cloud has a main controller that chooses the suitable partition for each arriving job, while the balancer for each cloud partition chooses the best load balancing strategy.
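A minimal sketch of this two-level model (the partition names, load figures, and the least-loaded strategy are illustrative assumptions, not the article's algorithm): a main controller picks a suitable partition for each arriving job, and that partition's balancer picks a node according to its current strategy.

```python
# Sketch of the two-level model: a main controller chooses a cloud
# partition, then the partition's balancer chooses a node. Partition
# names, loads, and the least-loaded strategy are illustrative only.

class Balancer:
    def __init__(self, node_loads):
        self.node_loads = node_loads          # node name -> current load

    def choose_node(self):
        # Dynamic strategy: pick the least-loaded node in the partition.
        return min(self.node_loads, key=self.node_loads.get)

class MainController:
    def __init__(self, partitions):
        self.partitions = partitions          # partition name -> Balancer

    def dispatch(self, job):
        # Choose the partition with the lowest average node load, then
        # let that partition's balancer pick the node for the job.
        def avg(b):
            return sum(b.node_loads.values()) / len(b.node_loads)
        name = min(self.partitions, key=lambda p: avg(self.partitions[p]))
        node = self.partitions[name].choose_node()
        return name, node

controller = MainController({
    "partition-A": Balancer({"a1": 0.9, "a2": 0.7}),
    "partition-B": Balancer({"b1": 0.2, "b2": 0.5}),
})
```

Here a job dispatched through the controller lands on partition-B (average load 0.35 versus 0.8) and, within it, on node b1, the least-loaded node.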
Advantages of Proposed System
The dynamic scheme used here takes the system status into account and can adapt as the status changes, providing the flexibility that static schemes lack.
IV. The Study Of The System
The aim of this phase is to conduct studies and analyses of an operational and technological nature, and to promote the exchange and development of methods and tools for operational analysis as applied to the problems under study.
Logical design
The logical design of a system pertains to an abstract representation of the data flows, inputs, and outputs of the system. This is often conducted via modeling, using an abstract (and sometimes graphical) model of the actual system.
Physical design
The physical design relates to the actual input and output processes of the system. This is laid down in
terms of how data is input into a system, how it is verified / authenticated, how it is processed, and how it is
displayed as output. In physical design, the following requirements of the system are decided:
1. Input requirements,
2. Output requirements,
3. Storage requirements,
4. Processing requirements,
5. System control and backup or recovery.
Put another way, the physical portion of systems design can generally be broken down into three sub-tasks:
1. User Interface Design
2. Data Design
3. Process Design
User Interface Design is concerned with how users add information to the system and with how the
system presents information back to them. Data Design is concerned with how the data is represented and stored
within the system. Finally, Process Design is concerned with how data moves through the system, and with how
and where it is validated, secured and/or transformed as it flows into, through and out of the system. At the end
of the systems design phase, documentation describing the three sub-tasks is produced and made available for
use in the next phase. Physical design, in this context, does not refer to the tangible physical design of an
information system. To use an analogy, a personal computer's physical design involves input via a keyboard,
processing within the CPU, and output via a monitor, printer, etc. It would not concern the actual layout of the
tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard drive, modems, video/graphics
cards, USB slots, etc. It involves a detailed design of the user and product database structures and of the processing and control processes. The hardware/software specification is developed for the proposed system.
V. Input & Output Representation
Input Design
The input design is the link between the information system and the user. It comprises the specifications and procedures for data preparation, that is, the steps necessary to put transaction data into a usable form for processing. This can be achieved by instructing the computer to read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considers the following things:
What data should be given as input?
How should the data be arranged or coded?
What dialog should guide the operating personnel in providing input?
What methods should be used for preparing input validations, and what steps should be followed when errors occur?
Objectives
Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the management the correct direction for getting accurate information from the computerized system. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.
When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as needed, so that the user is never left confused. Thus the objective of input design is to create an input layout that is easy to follow.
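The validity check described above can be sketched as follows (the field names and rules are hypothetical examples, not from the article): each entry is validated as it is keyed in, and an appropriate message is returned instead of leaving the user guessing.

```python
# Sketch of input validation at data entry: each field is checked as it
# is entered and a clear message is returned on error. The field names
# and the rules below are hypothetical examples.

def validate_entry(field: str, value: str):
    """Return (ok, message) for one keyed-in field."""
    if not value.strip():
        return False, f"{field} is required."
    if field == "age":
        if not value.isdigit():
            return False, "Age must be a whole number."
        if not 0 < int(value) < 130:
            return False, "Age must be between 1 and 129."
    return True, "OK"
```

For example, an empty field is rejected with a "required" message, a non-numeric age is rejected with an explanatory message, and a valid entry passes through unchanged.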
Output Design
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard copy output. Output is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.
a. Design computer output in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people find the system easy to use effectively. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
b. Select methods for presenting information.
c. Create document, report, or other formats that contain information produced by the system.
The output form of an information system should accomplish one or more of the following objectives:
Convey information about past activities, current status or projections of the future.
Signal important events, opportunities, problems, or warnings.
Trigger an action.
Confirm an action.
VI. Process Model Used With Justification
SDLC stands for Software Development Life Cycle. It is a standard process used by the software industry to develop good software.
SDLC (Umbrella Model):
Stages of SDLC:
Requirement Gathering and Analysis
Designing
Coding
Testing
Deployment
Requirements Definition Stage and Analysis
The requirements gathering process takes as its input the goals identified in the high-level requirements
section of the project plan. Each goal will be refined into a set of one or more requirements. These requirements
define the major functions of the intended application, define operational data areas and reference data areas,
and define the initial data entities. Major functions include critical processes to be managed, as well as mission
critical inputs, outputs and reports. A user class hierarchy is developed and associated with these major
functions, data areas, and data entities. Each of these definitions is termed a Requirement. Requirements are
identified by unique requirement identifiers and, at minimum, contain a requirement title and textual description.
These requirements are fully described in the primary deliverables for this stage: the Requirements
Document and the Requirements Traceability Matrix (RTM). The requirements document contains complete
descriptions of each requirement, including diagrams and references to external documents as necessary. Note
that detailed listings of database tables and fields are not included in the requirements document. The title of
each requirement is also placed into the first version of the RTM, along with the title of each goal from the
project plan. The purpose of the RTM is to show that the product components developed during each stage of
the software development lifecycle are formally connected to the components developed in prior stages.
In the requirements stage, the RTM consists of a list of high-level requirements, or goals, by title, with
a listing of associated requirements for each goal, listed by requirement title. In this hierarchical listing, the
RTM shows that each requirement developed during this stage is formally linked to a specific product goal. In
this format, each requirement can be traced to a specific product goal, hence the term requirements traceability.
The outputs of the requirements definition stage include the requirements document, the RTM, and an updated
project plan.
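The hierarchical RTM described above can be sketched as a simple mapping from goals to the requirements refined from them (the goal and requirement titles below are invented for illustration), together with a traceability check that links each requirement back to a specific goal.

```python
# Sketch of a Requirements Traceability Matrix: each goal maps to the
# requirements refined from it. Goal and requirement titles below are
# illustrative only, not taken from the article.

rtm = {
    "G1: Balance load across cloud partitions": [
        "R1.1: Monitor node load status",
        "R1.2: Select partition for arriving jobs",
    ],
    "G2: Keep the system stable under heavy load": [
        "R2.1: Apply a dynamic balancing strategy",
    ],
}

def trace(requirement_title: str):
    """Trace a requirement back to the goal it was refined from."""
    for goal, requirements in rtm.items():
        if requirement_title in requirements:
            return goal
    return None   # an untraced requirement indicates a gap in the RTM
```

Tracing "R2.1" returns goal G2, demonstrating the formal link between a requirement and its product goal; a title absent from the matrix returns None, flagging a traceability gap.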
Design Stage
The design stage takes as its initial input the requirements identified in the approved requirements
document. For each requirement, a set of one or more design elements will be produced as a result of interviews,
workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and
generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business
process diagrams, pseudo code, and a complete entity-relationship diagram with a full data dictionary. These
design elements are intended to describe the software in sufficient detail that skilled programmers may develop
the software with minimal additional input.
When the design document is finalized and accepted, the RTM is updated to show that each design
element is formally associated with a specific requirement. The outputs of the design stage are the design
document, an updated RTM, and an updated project plan.
Development Stage
The development stage takes as its primary input the design elements described in the approved design
document. For each design element, a set of one or more software artifacts will be produced. Software artifacts
include but are not limited to menus, dialogs, data management forms, data reporting formats, and specialized
procedures and functions. Appropriate test cases will be developed for each set of functionally related software
artifacts, and an online help system will be developed to guide users in their interactions with the software.
The RTM will be updated to show that each developed artifact is linked to a specific design element,
and that each developed artifact has one or more corresponding test case items. At this point, the RTM is in its
final configuration. The outputs of the development stage include a fully functional set of software that satisfies
the requirements and design elements previously documented, an online help system that describes the operation
of the software, an implementation map that identifies the primary code entry points for all major system
functions, a test plan that describes the test cases to be used to validate the correctness and completeness of the
software, an updated RTM, and an updated project plan.
Integration & Test Stage
During the integration and test stage, the software artifacts, online help, and test data are migrated from
the development environment to a separate test environment. At this point, all test cases are run to verify the
correctness and completeness of the software. Successful execution of the test suite confirms a robust and
complete migration capability. During this stage, reference data is finalized for production use and production
users are identified and linked to their appropriate roles. The final reference data (or links to reference data
source files) and production user list are compiled into the Production Initiation Plan.
The outputs of the integration and test stage include an integrated set of software, an online help
system, an implementation map, a production initiation plan that describes reference data and production users,
an acceptance plan which contains the final suite of test cases, and an updated project plan.
Installation & Acceptance Stage
During the installation and acceptance stage, the software artifacts, online help, and initial production
data are loaded onto the production server. At this point, all test cases are run to verify the correctness and
completeness of the software. Successful execution of the test suite is a prerequisite to acceptance of the
software by the customer. After customer personnel have verified that the initial production data load is correct
and the test suite has been executed with satisfactory results, the customer formally accepts the delivery of the
software.
The primary outputs of the installation and acceptance stage include a production application, a
completed acceptance test suite, and a memorandum of customer acceptance of the software. Finally, the PDR
enters the last of the actual labor data into the project schedule and locks the project as a permanent project
record. At this point the PDR "locks" the project by archiving all software items, the implementation map, the
source code, and the documentation for future reference.
VII. System Architecture
Architecture Flow
The architecture diagram below represents the flow of requests from users to the database through the servers. The overall system is designed in three tiers, using three separate layers: the presentation layer, the business layer, and the data link layer. This project was developed using a 3-tier architecture.
3-Tier Architecture
The three-tier software architecture (also called a three-layer architecture) emerged in the 1990s to overcome the limitations of the two-tier architecture. The third tier (middle tier server) sits between the user interface (client) and the data management (server) components. This middle tier provides process management, where business logic and rules are executed, and can accommodate hundreds of users (as compared to only 100 users with the two-tier architecture) by providing functions such as queuing, application execution, and database staging.
The three-tier architecture is used when an effective distributed client/server design is needed that provides (when compared to the two-tier) increased performance, flexibility, maintainability, reusability, and scalability, while hiding the complexity of distributed processing from the user. These characteristics have made three-layer architectures a popular choice for Internet applications and net-centric information systems.
Advantages of Three-Tier
Separates functionality from presentation.
Clear separation leads to better understanding.
Changes are limited to well-defined components.
Can run on the WWW.
Effective network performance.
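The tier separation above can be illustrated with a small sketch (the function names, the video domain, and the verification rule are assumptions drawn from the use cases later in the article, not a stated implementation): the presentation layer never touches the data layer directly, and business rules are enforced in the middle tier.

```python
# Sketch of 3-tier separation: presentation -> business -> data.
# Each tier calls only the tier beneath it. All names are illustrative.

def data_layer_fetch(video_id: int) -> dict:
    # Data tier: storage access (a real system would query a database).
    return {"id": video_id, "title": f"video-{video_id}", "verified": True}

def business_layer_get_video(video_id: int) -> dict:
    # Business tier: enforce rules before exposing data, e.g. only
    # verified videos may be watched.
    video = data_layer_fetch(video_id)
    if not video["verified"]:
        raise PermissionError("video not yet verified by admin")
    return video

def presentation_layer_watch(video_id: int) -> str:
    # Presentation tier: format the result for the user interface.
    video = business_layer_get_video(video_id)
    return f"Now playing: {video['title']}"
```

Because the presentation function only ever calls the business function, the verification rule can change (or the data tier can be swapped for a real database) without touching the user interface code, which is the maintainability benefit listed above.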
System Design Introduction:
The System Design Document describes the system requirements, operating environment, system and
subsystem architecture, files and database design, input formats, output layouts, human-machine interfaces,
detailed design, processing logic, and external interfaces.
VIII. UML Diagrams
1. Global Use Case Diagrams:
Identification of actors:
Actor: An actor represents the role a user plays with respect to the system. An actor interacts with, but has no
control over, the use cases.
Graphical representation: [actor stick-figure icon, labelled <<Actor name>>]
An actor is someone or something that:
Interacts with or uses the system.
Provides input to and receives information from the system.
Is external to the system and has no control over the use cases.
Actors are discovered by examining:
Who directly uses the system?
Who is responsible for maintaining the system?
External hardware used by the system.
Other systems that need to interact with the system.
Questions to identify actors:
Who is using the system? Or, who is affected by the system? Or, which groups need help from the system
to perform a task?
Who affects the system? Or, which user groups are needed by the system to perform its functions? These
functions can be both main functions and secondary functions such as administration.
Which external hardware or systems (if any) use the system to perform tasks?
What problems does this application solve (that is, for whom)?
And, finally, how do users use the system (use case)? What are they doing with the system?
The actors identified in this system are:
a. System Administrator
b. Customer
c. Customer Care
Identification of use cases:
Use case: A use case can be described as a specific way of using the system from a user's (actor's) perspective.
IX. Graphical Representation
A more detailed description might characterize a use case as:
Pattern of behavior the system exhibits
A sequence of related transactions performed by an actor and the system
Delivering something of value to the actor
Use cases provide a means to:
capture system requirements
communicate with the end users and domain experts
test the system
Use cases are best discovered by examining the actors and defining what the actor will be able to do with the
system.
Guidelines for identifying use cases:
For each actor, find the tasks and functions that the actor should be able to perform or that the system
needs the actor to perform. The use case should represent a course of events that leads to a clear goal.
Name the use cases.
Describe the use cases briefly by applying terms with which the user is familiar.
This makes the description less ambiguous.
Questions to identify use cases:
What are the tasks of each actor?
Will any actor create, store, change, remove, or read information in the system?
What use case will store, change, remove, or read this information?
Will any actor need to inform the system about sudden external changes?
Does any actor need to be informed about certain occurrences in the system?
What use cases will support and maintain the system?
Flow of Events
A flow of events is a sequence of transactions (or events) performed by the system. It typically
contains very detailed information, written in terms of what the system should do, not how the system
accomplishes the task. Flows of events are created as separate files or documents in your favorite text editor and
then attached or linked to a use case using the Files tab of a model element.
A flow of events should include:
When and how the use case starts and ends
Use case/actor interactions
Data needed by the use case
Normal sequence of events for the use case
Alternate or exceptional flows
Construction of Use case diagrams:
Use-case diagrams graphically depict system behavior (use cases). These diagrams present a high level
view of how the system is used as viewed from an outsider’s (actor’s) perspective. A use-case diagram may
depict all or some of the use cases of a system.
A use-case diagram can contain:
actors ("things" outside the system)
use cases (system boundaries identifying what the system should do)
Interactions or relationships between actors and use cases in the system including the associations,
dependencies, and generalizations.
Relationships in use cases:
1. Communication:
The communication relationship of an actor in a use case is shown by connecting the actor symbol to
the use case symbol with a solid path. The actor is said to communicate with the use case.
2. Uses:
A uses relationship between use cases is shown by a generalization arrow from the use case.
3. Extends:
The extends relationship is used when we have one use case that is similar to another use case but does a
bit more. In essence, it is like a subclass.
Main Use Case Diagram in our system:
[Use case diagram: the Admin actor logs in, uploads videos, and verifies videos; the User actor watches videos.]
X. Activity Diagram
Activity diagrams provide a way to model the workflow of a business process, or code-specific
information such as a class operation. The transitions are implicitly triggered by completion of the actions in the
source activities. The main difference between activity diagrams and state charts is that activity diagrams are
activity centric, while state charts are state centric. An activity diagram is typically used for modeling the
sequence of activities in a process, whereas a state chart is better suited to model the discrete stages of an
object's lifetime. An activity represents the performance of a task or duty in a workflow. It may also represent
the execution of a statement in a procedure. You can share activities between state machines; however,
transitions cannot be shared. An action is described as a "task" that takes place while inside a state or activity.
Actions on activities can occur at one of four times:
on entry ---- The "task" must be performed when the object enters the state or activity.
on exit ---- The "task" must be performed when the object exits the state or activity.
do ---- The "task" must be performed while in the state or activity and must continue until exiting
the state.
on event ---- The "task" triggers an action only if a specific event is received.
An end state represents a final or terminal state on an activity diagram or state chart diagram.
A start state (also called an "initial state") explicitly shows the beginning of a workflow on an activity
diagram.
Swim lanes can represent organizational units or roles within a business model. They are very similar to
an object. They are used to determine which unit is responsible for carrying out a specific activity, and they
show ownership or responsibility. Transitions may cross swim lanes.
Synchronizations enable you to see simultaneous workflows in an activity diagram. Synchronizations
visually define forks and joins representing parallel workflow.
A fork construct is used to model a single flow of control that divides into two or more separate but
simultaneous flows. A corresponding join should ideally accompany every fork that appears on an
activity diagram. A join consists of two or more flows of control that unite into a single flow of control.
All model elements (such as activities and states) that appear between a fork and join must complete
before the flow of controls can unite into one.
An object flow on an activity diagram represents the relationship between an activity and the object that
creates it (as an output) or uses it (as an input).
[Activity diagram: the Admin logs in, uploads videos, and verifies them; the User watches the videos through the cloud after the Admin check.]
XI. Sequence Diagrams
A sequence diagram is a graphical view of a scenario that shows object interaction in a time-based
sequence: what happens first, what happens next. Sequence diagrams establish the roles of objects and help
provide essential information to determine class responsibilities and interfaces. There are two main differences
between sequence and collaboration diagrams: sequence diagrams show time-based object interaction, while
collaboration diagrams show how objects associate with each other. A sequence diagram has two dimensions:
typically, vertical placement represents time and horizontal placement represents different objects.
Object:
An object has state, behavior, and identity. The structure and behavior of similar objects are defined in
their common class. Each object in a diagram indicates some instance of a class. An object that is not named is
referred to as a class instance. The object icon is similar to a class icon except that the name is underlined. An
object's concurrency is defined by the concurrency of its class.
Message:
A message is the communication carried between two objects that triggers an event. A message carries
information from the source focus of control to the destination focus of control. The synchronization of a
message can be modified through the message specification. Synchronization means a message where the
sending object pauses to wait for results.
Link:
A link should exist between two objects, including class utilities, only if there is a relationship between
their corresponding classes. The existence of a relationship between two classes symbolizes a path of
communication between instances of the classes: one object may send messages to another. The link is depicted
as a straight line between objects or objects and class instances in a collaboration diagram. If an object links to
itself, use the loop version of the icon.
Admin Sequence: [diagram]
Admin Collaboration: [diagram]
User Sequence: [diagram: the user object invokes cloud() on the clouds object and watch() on the videos object]
User Collaboration: [diagram]
XII. Class Diagram
Identification of analysis classes:
A class is a set of objects that share a common structure and common behavior (the same attributes,
operations, relationships and semantics). A class is an abstraction of real-world items.
There are four approaches for identifying classes:
a. Noun phrase approach.
b. Common class pattern approach.
c. Use case driven sequence or collaboration approach.
d. Classes, Responsibilities, and Collaborators (CRC) approach.
1. Noun Phrase Approach:
The guidelines for identifying the classes:
Look for nouns and noun phrases in the use cases.
Some classes are implicit or taken from general knowledge.
All classes must make sense in the application domain; avoid computer implementation classes and defer them to the design stage.
Carefully choose and define the class names.
After identifying the classes, we have to eliminate the following types of classes:
Adjective classes.
2. Common class pattern approach:
The following are the patterns for finding the candidate classes:
Concept class.
Events class.
Organization class.
People class.
Places class.
Tangible things and devices class.
3. Use case driven approach:
We have to draw the sequence diagram or collaboration diagram. If some required functionality is not covered
by the existing classes, add new classes that perform it.
XIII. CRC Approach
The process consists of the following steps:
Identify the classes and their responsibilities.
Assign the responsibilities
Identify the collaborators.
Identification of responsibilities of each class:
The questions that should be answered to identify the attributes and methods of a class respectively are:
a. What information about an object should we keep track of?
b. What services must a class provide?
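A CRC card is just a named record of responsibilities and collaborators, so the steps above can be sketched as a small data structure. The card contents below are illustrative, drawn loosely from this paper's user/cloud scenario rather than from its actual design.

```python
from dataclasses import dataclass, field

# A CRC (Class, Responsibilities, Collaborators) card as a plain record.
@dataclass
class CRCCard:
    name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

# "What information do we keep?" and "what services must it provide?"
# both feed the responsibilities list; collaborators are the classes
# this class must talk to in order to fulfil them.
user_card = CRCCard(
    name="User",
    responsibilities=["keep id and name", "select a cloud", "watch videos"],
    collaborators=["Cloud"],
)
cloud_card = CRCCard(
    name="Cloud",
    responsibilities=["store files by category", "serve selected files"],
    collaborators=["ServerStatus"],
)

for card in (user_card, cloud_card):
    print(card.name, "->", card.collaborators)
```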
Identification of relationships among the classes:
Three types of relationships among the objects are:
Association: How are objects associated?
Super-sub structure: How are objects organized into superclasses and subclasses?
Aggregation: What is the composition of the complex classes?
Association:
The questions that will help us to identify the associations are:
a. Is the class capable of fulfilling the required task by itself?
b. If not, what does it need?
c. From what other classes can it acquire what it needs?
Guidelines for identifying the tentative associations:
A dependency between two or more classes may be an association. Association often corresponds to a
verb or prepositional phrase.
A reference from one class to another is an association. Some associations are implicit or taken from
general knowledge.
Some common association patterns are:
Location associations, such as "part of", "next to", "contained in".
Communication associations, such as "talk to", "order".
We have to eliminate unnecessary associations, such as implementation associations, ternary or n-ary
associations, and derived associations.
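The guideline that "a reference from one class to another is an association" is easy to see in code. The sketch below uses this paper's user/cloud scenario; the select_cloud method and the attribute names are illustrative, not taken from the paper's design.

```python
class Cloud:
    def __init__(self, name):
        self.name = name

class User:
    def __init__(self, name):
        self.name = name
        self.selected_cloud = None  # this reference *is* the association

    def select_cloud(self, cloud):
        # "selects" is the verb phrase that names the association,
        # as the identification guideline above suggests.
        self.selected_cloud = cloud

u = User("alice")
u.select_cloud(Cloud("cloud1"))
print(u.selected_cloud.name)
```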
Super-sub class relationships:
A super-sub class hierarchy is a relationship between classes where one class is the parent class of
another class (the derived class). It is based on inheritance.
Guidelines for identifying the super-sub relationship (a generalization) are:
1. Top-down:
Look for noun phrases composed of various adjectives in a class name. Avoid excessive refinement.
Specialize only when the sub classes have significant behavior.
2. Bottom-up:
Look for classes with similar attributes or methods. Group them by moving the common attributes and methods
to an abstract class. You may have to alter the definitions a bit.
3. Reusability:
Move the attributes and methods as high as possible in the hierarchy.
4. Multiple inheritance:
Avoid excessive use of multiple inheritance. One way of getting the benefits of multiple inheritance is to inherit
from the most appropriate class and add an object of another class as an attribute.
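The multiple-inheritance guideline above can be sketched as follows. The Server/Monitor names are illustrative, not classes from this paper: instead of inheriting from both, we inherit from the more appropriate class and hold the other as an attribute, delegating to it.

```python
class Server:
    def handle(self, request):
        return f"handled {request}"

class Monitor:
    def report(self):
        return "status: ok"

# Rather than `class MonitoredServer(Server, Monitor)`, inherit from the
# most appropriate class (Server) and embed the other as an attribute.
class MonitoredServer(Server):
    def __init__(self):
        self.monitor = Monitor()  # second "parent" becomes an attribute

    def report(self):
        return self.monitor.report()  # delegate instead of inheriting

s = MonitoredServer()
print(s.handle("job-1"), "/", s.report())
```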
Aggregation or a-part-of relationship:
It represents the situation where a class consists of several component classes. A class that is composed
of other classes does not behave like its parts; it behaves very differently. The major properties of this relationship
are transitivity and antisymmetry.
The questions whose answers will determine the distinction between the part and whole relationships
are:
Does the part class belong to the problem domain?
Is the part class within the system’s responsibilities?
Does the part class capture more than a single value? (If not, simply include it as an attribute of the
whole class.)
Does it provide a useful abstraction in dealing with the problem domain?
There are three types of aggregation relationships. They are:
Assembly:
It is constructed from its parts and an assembly-part situation physically exists.
Container:
A physical whole encompasses but is not constructed from physical parts.
Collection member:
A conceptual whole encompasses parts that may be physical or conceptual. Container and collection aggregations
are represented by hollow diamonds, while composition is represented by a solid diamond.
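The hollow-diamond/solid-diamond distinction can be made concrete in code. In the sketch below (the Node/Cluster and Car/Engine names are illustrative, not from this paper), an aggregated part exists independently and can outlive the whole, while a composed part is created and owned by the whole.

```python
class Node:
    def __init__(self, name):
        self.name = name

class Cluster:                 # aggregation (hollow diamond): the whole
    def __init__(self, nodes): # merely references parts created elsewhere
        self.nodes = nodes

class Engine:
    pass

class Car:                     # composition (solid diamond): the whole
    def __init__(self):        # creates and owns its part
        self.engine = Engine()

n1, n2 = Node("n1"), Node("n2")
cluster = Cluster([n1, n2])
del cluster                    # the aggregated nodes survive the whole
print(n1.name, n2.name)
```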
(Class diagram: a user class with attributes id and name and operation user(); three cloud classes, each with
attributes file_name, file, and category and operation selectCloud(); and a server_status class with attributes
no, server, and status.)
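The class diagram above can be rendered as skeleton Python classes. Attribute and operation names follow the diagram; the constructors and the stub body of select_cloud are added here for illustration, since the diagram specifies only names.

```python
# Skeleton classes mirroring the class diagram; method bodies are stubs.
class User:
    def __init__(self, id, name):
        self.id = id
        self.name = name

class Cloud:
    def __init__(self, file_name, file, category):
        self.file_name = file_name
        self.file = file
        self.category = category

    def select_cloud(self):
        # Placeholder: in the design, selectCloud() routes the user's
        # request to one of the cloud classes.
        return self

class ServerStatus:
    def __init__(self, no, server, status):
        self.no = no
        self.server = server
        self.status = status

u = User(1, "alice")
c = Cloud("movie.mp4", b"...", "video")
print(u.name, c.select_cloud().category)
```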
XIV. Conclusion
So far we have discussed the basic concepts of cloud computing and load balancing. In addition,
a load balancing technique based on swarm intelligence has been discussed: we have shown how
mobile agents can balance the load of a cloud using the concept of Ant Colony Optimization. The
limitation of this technique is that it is more efficient when clusters are formed within the cloud. The research
work can therefore proceed toward a complete load balancing solution for a full cloud environment. Our
objective in this paper is to develop an effective load balancing algorithm using the Ant Colony Optimization
technique to optimize performance parameters such as CPU load, memory capacity, delay,
and network load for clouds of different sizes.
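The ant-colony idea in the conclusion can be illustrated with a minimal pheromone-selection sketch. This is a generic toy model, not the algorithm developed in this paper: choose_node, update, the evaporation rate, and the load figures are all illustrative assumptions, chosen so that lightly loaded nodes accumulate pheromone and attract more jobs.

```python
import random

def choose_node(pheromone):
    """Pick a node index with probability proportional to its pheromone."""
    total = sum(pheromone)
    r = random.uniform(0, total)
    acc = 0.0
    for i, p in enumerate(pheromone):
        acc += p
        if r <= acc:
            return i
    return len(pheromone) - 1

def update(pheromone, node, load, evaporation=0.1):
    """Evaporate all trails, then reinforce the chosen node.

    Lighter load gives a larger deposit, steering future jobs there.
    """
    for i in range(len(pheromone)):
        pheromone[i] *= (1 - evaporation)
    pheromone[node] += 1.0 / (1.0 + load)

pheromone = [1.0, 1.0, 1.0]
loads = [0.9, 0.1, 0.5]  # hypothetical fraction of each node's capacity in use
random.seed(0)
for _ in range(200):
    n = choose_node(pheromone)
    update(pheromone, n, loads[n])
print(pheromone)  # the lightly loaded node tends to dominate over time
```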
REFERENCES
[1] R. Hunter, The why of cloud, http://www.gartner.com/DisplayDocument?doc cd=226469&ref=g noreg, 2012.
[2] M. D. Dikaiakos, D. Katsaros, P. Mehra, G. Pallis, and A. Vakali, Cloud computing: Distributed internet computing
for IT and scientific research, Internet Computing, vol. 13, no. 5, pp. 10-13, Sept.-Oct. 2009.
[3] P. Mell and T. Grance, The NIST definition of cloud computing,
http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf, 2012.
[4] Microsoft Academic Research, Cloud computing,
http://libra.msra.cn/Keyword/6051/cloud-computing?query=cloud%20computing, 2012.
[5] Google Trends, Cloud computing, http://www.google.com/trends/explore#q=cloud%20computing, 2012.
[6] N. G. Shivaratri, P. Krueger, and M. Singhal, Load distributing for locally distributed systems, Computer, vol. 25,
no. 12, pp. 33-44, Dec. 1992.
[7] B. Adler, Load balancing in the cloud: Tools, tips and techniques,
http://www.rightscale.com/info center/white-papers/Load-Balancing-in-the-Cloud.pdf, 2012.