This document proposes a mechanism for effectively distributing limited bandwidth among cloud computing users. It divides users into three groups based on their network usage capacities and assigns each group a different bandwidth allotment: administrators receive 1000BASE-T, medium users 100BASE-T, and normal users 10BASE-T. Simulations measured each group's network performance in terms of throughput, response time, and utilization. The results showed that the bandwidth was managed optimally, with each group achieving maximum cloud service usage within its allotted capacity.
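The three-tier allotment described above reduces to a simple lookup. A minimal sketch, where the group labels and Mbps figures follow the abstract but the dictionary and function names are illustrative:

```python
# Bandwidth tiers from the abstract: 1000BASE-T, 100BASE-T, 10BASE-T (rates in Mbps).
# Names below are illustrative, not from the paper.
BANDWIDTH_TIERS_MBPS = {
    "administrator": 1000,
    "medium": 100,
    "normal": 10,
}

def allotted_bandwidth(group: str) -> int:
    """Return the bandwidth allotment (Mbps) for a user group."""
    try:
        return BANDWIDTH_TIERS_MBPS[group]
    except KeyError:
        raise ValueError(f"unknown user group: {group!r}")
```

Enforcement of the allotment (e.g., via traffic shaping) is outside this sketch; the lookup only captures the grouping policy.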
A Novel Routing Strategy Towards Achieving Ultra-Low End-to-End Latency in 6G... (IJCNC Journal)
Compared to 5G, 6G networks will demand even more ambitious reductions in end-to-end latency for packet communication. Recent attempts at breaking the barrier of millisecond end-to-end latencies have focused on re-engineering networks using a hybrid approach: an optical-fiber-based backbone network architecture coupled with high-speed wireless networks that connect end devices to the backbone. In our approach, a wide area network (WAN) is considered with a high-speed optical fiber grid network as its backbone. After messages from a source node enter the backbone network through a local wireless network, they are delivered very fast to the access point in the backbone network closest to the destination node, and then transferred to the local wireless network for delivery to the destination node. We propose a novel routing strategy that distributes messages in the network so that the average queuing delay of messages through the backbone network is minimized and the route discovery time at each router in the backbone network is drastically reduced. In addition, multiple messages destined for a particular destination router in the backbone network are packed together to form a mailbag, allowing further reductions in processing overheads at intermediate routers and pipelining of mailbag formation and route discovery operations in each router. The performance of the proposed approach based on these ideas has been theoretically analyzed and then simulated using the ns-3 simulator. Our results show that the average end-to-end latency is less than 380 µs (with only 46-79 µs within the backbone network under varying traffic conditions) for a 1 KB packet size, when using a 500 Gbps optical-fiber-based backbone network laid over a 15 km × 15 km area, a 50 Mbps uplink channel from the source to the backbone network, and a 1 Gbps downlink channel from the backbone network to the destination.
The significant reduction in end-to-end latency as compared to existing routing solutions clearly demonstrates the potential of our proposed routing strategy for meeting the ultra-low latency requirements of current 5G and future 6G networks, particularly for mobile edge computing (MEC) application scenarios.
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRES (IJCSIT)
Hybrid intra-data-centre networks, with both optical and electrical capabilities, have attracted research interest in recent years. This is attributed to the emergence of new bandwidth-greedy applications and novel computing paradigms. A key decision in networks of this type is the selection and placement of suitable flows for switching in the circuit network. Here, we propose an efficient strategy for flow selection and placement suitable for hybrid intra-cloud data centre networks. We further present techniques for investigating bottlenecks in packet networks and for selecting flows to switch in the circuit network. The bottleneck technique is verified on a Software Defined Network (SDN) testbed. We also implemented the techniques presented here in a scalable simulation experiment to investigate the impact of flow selection on network performance. The results indicate a considerable improvement in average throughput, lower configuration delay, and stability of offloaded flows.
Energy Optimized Link Selection Algorithm for Mobile Cloud Computing (Eswar Publications)
Mobile cloud computing is a revolutionary distributed computing research area that combines three domains: cloud computing, wireless networks, and mobile computing. It aims to improve the computational capabilities of mobile devices while minimizing energy consumption. Heavy computations can be offloaded to the cloud to decrease the energy consumption of the mobile device. In some mobile cloud applications, however, using the cloud has proved less energy efficient than conventional computing conducted on the local device. Although mobile cloud computing is a reliable idea, mobile phones still face several problems, such as limited storage, short battery life, and so on. One of the most important concerns for mobile devices is low energy consumption. Different network links have different uplink and downlink bandwidths for task and data transmission between the mobile device and the cloud. In this paper, a novel optimal link selection algorithm is proposed to minimize mobile energy consumption. In the first phase, all available networks are scanned and the signal strength of each is calculated. The calculated signals, along with the network locations, are given as input to the optimal link selection algorithm. After the algorithm executes, an optimal network link is selected.
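The two-phase selection described above can be sketched as follows. This is a minimal illustration, assuming each scanned network exposes a nominal bandwidth and transmit power; the simple energy model (power × transfer time) and all names are illustrative, not the paper's actual criteria:

```python
def estimated_energy_j(data_mb: float, bandwidth_mbps: float, tx_power_w: float) -> float:
    """Energy = transmit power * transfer time (a simple illustrative model)."""
    transfer_time_s = (data_mb * 8) / bandwidth_mbps
    return tx_power_w * transfer_time_s

def select_optimal_link(networks, data_mb):
    """Phase 2: pick the scanned network that minimizes estimated transfer energy."""
    return min(networks,
               key=lambda n: estimated_energy_j(data_mb, n["bandwidth_mbps"], n["tx_power_w"]))

# Phase 1 output: scanned networks with measured attributes (made-up values).
scanned = [
    {"name": "wifi", "bandwidth_mbps": 54.0, "tx_power_w": 0.8},
    {"name": "lte",  "bandwidth_mbps": 20.0, "tx_power_w": 1.2},
]
best = select_optimal_link(scanned, data_mb=10.0)
```

In this made-up example the Wi-Fi link wins because its higher bandwidth shortens the transfer enough to offset the comparison; a real implementation would fold signal strength and network location into the energy estimate as the abstract describes.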
Improved quality of service-based cloud service ranking and recommendation model (TELKOMNIKA Journal)
Cloud computing is one of the ongoing technologies used by a large number of companies and users. This computing technology has proved that it provides a distinct level of efficiency, security, privacy, flexibility, and availability to its users. Cloud computing delivers on-demand services to users through various service-based models. All of these models work on utility-based computing: users pay for the services they use. Along with its various advantages, the cloud computing environment has its own limitations and problems, such as efficient resource identification or discovery, security, task scheduling, compliance, and sustainability. Among these, resource identification and scheduling play an important role because users submit jobs and expect responses in the least possible time. Research is happening all around the world to optimize response time and makespan so as to reduce the burden on cloud resources. In this paper, a QoS-based service ranking model is proposed for the cloud computing environment to find the essential top-ranked services. The proposed model is implemented in two phases. In the first phase, the similarity between users and their services is computed. In the second phase, the missing values are computed based on these similarity measures. The efficiency of the proposed ranking is measured, and the average precision correlation of the proposed ranking measure shows better results than the existing measures.
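The two phases above follow the usual collaborative-filtering shape: compute user-to-user similarity over observed QoS values, then fill each missing value with a similarity-weighted average. A minimal sketch, assuming cosine similarity over co-rated services (the abstract does not specify the similarity measure, so this choice and all names are illustrative):

```python
import math

def cosine_sim(u, v):
    """Phase 1: cosine similarity over services both users have observed."""
    pairs = [(a, b) for a, b in zip(u, v) if a is not None and b is not None]
    if not pairs:
        return 0.0
    num = sum(a * b for a, b in pairs)
    den = (math.sqrt(sum(a * a for a, _ in pairs))
           * math.sqrt(sum(b * b for _, b in pairs)))
    return num / den if den else 0.0

def predict_missing(qos, user, service):
    """Phase 2: similarity-weighted average of other users' observed QoS."""
    num = den = 0.0
    for other, row in enumerate(qos):
        if other == user or row[service] is None:
            continue
        w = cosine_sim(qos[user], row)
        num += w * row[service]
        den += w
    return num / den if den else None

# Rows: users; columns: services; None marks an unobserved QoS value.
qos = [
    [0.9, 0.8, None],
    [0.9, 0.7, 0.6],
    [0.8, 0.8, 0.5],
]
pred = predict_missing(qos, user=0, service=2)
```

With the predicted values filled in, services can then be ranked per user by their (observed or predicted) QoS scores.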
Adaptive Offloading in Mobile Cloud Computing
Adaptive offloading in mobile cloud computing, through automatic partitioning of tasks, is the idea of augmenting execution by migrating heavy computation from mobile devices to resourceful cloud servers and then receiving the results from them via wireless networks. Offloading is an effective way to overcome the resource and functionality constraints of mobile devices, since it can release them from intensive processing and increase the performance of mobile applications in terms of response time. Offloading brings many potential benefits, such as energy saving, performance improvement, reliability improvement, ease for software developers, and better exploitation of contextual information. Parameters such as method transitions, response times, cost, and energy consumption are dynamically re-estimated at runtime during application execution.
Dynamic resource allocation for opportunistic software-defined IoT networks: s... (IJECE, IAES)
Several wireless technologies have recently emerged to enable efficient and scalable Internet-of-Things (IoT) networking. Cognitive radio (CR) technology, enabled by software-defined radios, is considered one of the main IoT-enabling technologies that can provide opportunistic wireless access to a large number of connected IoT devices. An important challenge in this domain is how to dynamically enable IoT transmissions while achieving efficient spectrum usage with minimum total power consumption under interference and traffic demand uncertainty. Toward this end, we propose a dynamic bandwidth/channel/power allocation algorithm that aims to maximize the overall network throughput while selecting the power levels that result in the minimum total transmission power. This problem can be formulated as a two-stage binary linear stochastic program. Because the interference over different channels is a continuous random variable, and noting that the interference statistics are highly correlated, a suboptimal sampling solution is proposed. Our algorithm is adaptive: it is conducted periodically over time to account for changes in channel and interference conditions. Numerical results indicate that our proposed algorithm significantly increases the number of simultaneous IoT transmissions compared to a typical algorithm, and hence the achieved throughput is improved.
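The minimum-power objective above can be illustrated with a deliberately simplified per-device channel choice: among channels where the SNR target can be met within the power budget, pick the one needing the least transmit power. This is only a sketch of the objective, not the paper's two-stage stochastic program; the linear interference model and all names are assumptions:

```python
def min_power_channel(channels, required_snr_db):
    """Among feasible channels, pick the one needing the least transmit power.
    The power needed scales with the channel's interference level to meet the
    SNR target (a simplified illustrative model, not the paper's formulation)."""
    feasible = []
    for ch in channels:
        # Linear-scale power needed to hit the SNR target over this channel.
        needed_mw = ch["interference_mw"] * 10 ** (required_snr_db / 10)
        if needed_mw <= ch["max_power_mw"]:
            feasible.append((needed_mw, ch["id"]))
    if not feasible:
        return None  # no channel can serve the device this period
    needed, ch_id = min(feasible)
    return {"channel": ch_id, "power_mw": needed}

# Two candidate channels with different measured interference (made-up values).
channels = [
    {"id": 1, "interference_mw": 0.02, "max_power_mw": 100.0},
    {"id": 2, "interference_mw": 0.005, "max_power_mw": 100.0},
]
assignment = min_power_channel(channels, required_snr_db=20.0)
```

The less-interfered channel wins because it reaches the SNR target with less power; rerunning this selection each period mirrors the adaptive, periodically conducted nature of the proposed algorithm.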
A New Improved Storage Model of Wireless Devices using the Cloud (IJCNC)
This paper focuses on the development of a new storage model for mobile devices using cloud computing. The concept of cloud computing has been applied to mobile devices to improve their existing model (battery time and data saving). In recent years, different types of cloud computing techniques have been used to improve the efficiency of mobile devices. The paper combines the calibration and current draw characteristics with trial results for the drop in battery voltage. A mathematical equation has been derived for the mote operation scenario. Through this equation, the power provided by the power supply as well as the average battery lifetime can be measured.
MCCVA: A NEW APPROACH USING SVM AND KMEANS FOR LOAD BALANCING ON CLOUD (IJCCSA)
Nowadays, the demand for resources and services, via intranet systems or the Internet, is growing rapidly. The accompanying problem is how to use these resources effectively in terms of time and quality. Network QoS and its economics are therefore major concerns, and cloud computing was born as an inevitable trend. However, managing resources and scheduling tasks in virtualized data centres on the cloud are challenging tasks. Currently, many load balancing algorithms for clouds have been proposed by authors, scholars, and experts. These existing methods are mostly natural and heuristic; the application of AI and modern data mining technologies to load balancing is not yet common, due to the particular characteristics of the cloud. In this paper, we propose an algorithm to reduce the processing time (makespan) in cloud computing, helping load balancing work more efficiently. We use the SVM algorithm to classify incoming requests and K-means to cluster the VMs in the cloud; the load balancer then allocates the requests to the VMs in the most reasonable way. In this scheme, the request with the least processing time is allocated to the VM with the lowest usage. We name this new proposal MCCVA (Makespan Classification & Clustering VM Algorithm). We experimented with and evaluated this algorithm in CloudSim, a cloud simulation environment, and obtained better results than some other well-known algorithms. With MCCVA, we can see the big potential of AI and data mining in load balancing, and we can further develop load balancing with AI to achieve ever better QoS.
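The MCCVA pipeline (classify requests by predicted makespan, cluster VMs by usage, then match the shortest requests to the least-loaded VMs) can be sketched in miniature. A tiny 1-D k-means stands in for the VM clustering, and a simple threshold stands in for the trained SVM classifier; thresholds, values, and names are all illustrative:

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means for clustering VMs by current usage (illustrative)."""
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda c: abs(v - centroids[c]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def assign_request(predicted_makespan, vm_usages):
    """MCCVA's allocation rule in miniature: the request with the least
    predicted processing time goes to the VM with the lowest usage.
    (A fixed threshold stands in for the trained SVM classifier.)"""
    request_class = "short" if predicted_makespan < 1.0 else "long"
    target_vm = min(range(len(vm_usages)), key=lambda i: vm_usages[i])
    return request_class, target_vm

# Cluster four VMs by usage into a "lightly loaded" and a "heavily loaded" group.
centroids = kmeans_1d([0.1, 0.15, 0.8, 0.85], k=2)
cls, vm = assign_request(predicted_makespan=0.4, vm_usages=[0.8, 0.1, 0.5])
```

In the full MCCVA scheme the classifier is an SVM trained on request features and the clustering runs over multi-dimensional VM state; the matching rule between request classes and VM clusters is the part sketched here.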
ENERGY EFFICIENT COMPUTING FOR SMART PHONES IN CLOUD ASSISTED ENVIRONMENT (IJCNC Journal)
In recent years, the use of smart mobile phones has increased enormously, and they have become an integral part of human life. Smartphones can support an immense range of complex and intensive applications, which results in reduced power capacity and lower performance. Mobile cloud computing is a newly rising paradigm that integrates the features of cloud computing and mobile computing to overcome the constraints of mobile devices. Mobile cloud computing employs computational offloading, which migrates computations from mobile devices to remote servers. In this paper, a novel model is proposed for dynamic task offloading to attain energy optimization and better performance for mobile applications in the cloud environment. The paper proposes an optimal offloading algorithm that introduces new criteria, such as benchmarking, for offloading decision making. It also supports partitioning to divide a computing problem into various sub-problems, which can be executed in parallel on the mobile device and the cloud. Performance evaluation results show that the proposed model can reduce energy consumption by around 20% to 53% for low-complexity problems and by up to 98% for high-complexity problems.
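The per-sub-problem offloading decision above can be illustrated with a standard energy comparison: offload when the device spends less energy transmitting and idling than computing locally. This is a common textbook model used for illustration, not the paper's benchmark-based criteria; all parameter names and values are assumptions:

```python
def should_offload(cycles, data_mb, local_mips, cloud_mips,
                   uplink_mbps, local_power_w, tx_power_w, idle_power_w):
    """Offload a sub-problem iff the device spends less energy shipping it to
    the cloud and waiting than computing it locally (illustrative model)."""
    local_energy = (cycles / local_mips) * local_power_w
    transfer_s = (data_mb * 8) / uplink_mbps
    cloud_energy = transfer_s * tx_power_w + (cycles / cloud_mips) * idle_power_w
    return cloud_energy < local_energy

# A heavy sub-problem: many cycles, little data to ship (made-up numbers).
heavy = should_offload(cycles=5e9, data_mb=1.0, local_mips=1e9, cloud_mips=1e11,
                       uplink_mbps=50.0, local_power_w=2.0, tx_power_w=1.0,
                       idle_power_w=0.3)

# A light, data-heavy sub-problem: cheap to compute, expensive to ship.
light = should_offload(cycles=1e8, data_mb=50.0, local_mips=1e9, cloud_mips=1e11,
                       uplink_mbps=50.0, local_power_w=2.0, tx_power_w=1.0,
                       idle_power_w=0.3)
```

The contrast between the two calls captures why partitioning matters: compute-heavy, data-light sub-problems favor the cloud, while data-heavy ones stay local, which is consistent with the larger savings the paper reports for high-complexity problems.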
An Empirical study on Peer-to-Peer sharing of resources in Mobile Cloud Envi... (IJECE, IAES)
The increasing use of mobile devices on the Internet and the interoperability among cloud services intensify the role of distributed environments in today's real-world applications. Modern technologies are important for building rich, scalable, and interoperable applications. To meet client requirements, the cloud service provider should offer adequate infrastructure, especially under heavy multi-client load. To provide a solution for large-scale requirements and to help the mobile client in critical situations such as insufficient bandwidth, connectivity issues, and low service completion ratio, we present an ad hoc virtual cloud model for different scenarios that include single- and multiple-client configurations with various file sizes and formats for retrieving files in the mobile cloud environment. We evaluate the strategies with socket and RMI implementations in Java and identify the best model for real-world applications. A performance evaluation is carried out on the obtained results, and we recommend when sockets and RMI can be appropriately used in a peer-to-peer environment when the mobile user cannot connect directly to the cloud services.
Techniques to Minimize State Transfer Cost for Dynamic Execution Offloading I... (IJERA)
The recent advancement in cloud computing is leading to an excessive growth of mobile devices that can become powerful means for information access and mobile applications, introducing a latent technology called mobile cloud computing. Smartphone devices support a wide range of mobile applications that require high computational power, memory, storage, and energy, but these resources are limited and so act as constraints on smartphone devices. With the integration of cloud computing and mobile applications, it is possible to overcome these constraints by offloading the complex modules to the cloud. These restrictions may be alleviated by computation offloading: sending heavy computations to resourceful servers and receiving the results from these servers. Many issues related to offloading have been investigated in the past decade.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET... (IJCNC Journal)
Classic information processing is increasingly being replaced by cloud computing, which has become more popular and is growing faster than other computing models. Cloud computing provides on-demand services for users. Reliability and energy consumption are two hot challenges and trade-off problems in the cloud computing environment that require careful attention and research. This paper proposes an Auto Resource Management (ARM) scheme to enhance reliability by reducing Service Level Agreement (SLA) violations and to reduce the energy consumed by cloud computing servers. The ARM scheme consists of three components: a static/dynamic threshold, a virtual machine selection policy, and a short-term resource utilization prediction method. The Minimum Utilization Non-Negative (MUN) virtual machine selection policy and the Rate of Change (RoC) dynamic threshold are presented in this paper, and a method for choosing a value for the static threshold is also proposed. To improve ARM performance, the paper proposes Short Prediction Resource Utilization (SPRU), which aims to improve decision making by considering resource utilization at both the current time and a predicted future time. The results show that SPRU enhanced the decision-making process for managing cloud computing resources and reduced energy consumption and SLA violations. The proposed scheme was tested on real workload data with the CloudSim simulator.
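The threshold-then-select flow described above can be sketched as follows. The fixed threshold stands in for the paper's static/dynamic (RoC) threshold, and the selection rule is one plausible reading of the MUN acronym: migrate the VM with the minimum non-negative utilization. All semantics and names here are illustrative assumptions, not the paper's definitions:

```python
def is_overloaded(host_util, threshold=0.8):
    """Overload check with a fixed threshold (the paper also derives a dynamic
    Rate-of-Change threshold; a static value is used here for illustration)."""
    return host_util > threshold

def select_vm_mun(vm_utils):
    """One plausible reading of the MUN policy: pick the VM with the minimum
    non-negative utilization for migration (illustrative, not the paper's
    exact definition)."""
    candidates = [(u, i) for i, u in enumerate(vm_utils) if u >= 0]
    return min(candidates)[1] if candidates else None

host_util = 0.9
vm_utils = [0.5, 0.05, 0.35, -1.0]   # -1.0 marks a VM with no valid reading
victim = select_vm_mun(vm_utils) if is_overloaded(host_util) else None
```

Migrating the least-utilized VM keeps migration cost low while relieving the overloaded host, which is the trade-off between SLA violations and energy that the ARM scheme targets.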
A NURBS-optimized dRRM solution in a mono-channel condition for IEEE 802.11 e... (IJECE, IAES)
Dynamic Radio Resource Management (dRRM) is an essential design block in the functional architecture of any Wi-Fi controller in IEEE 802.11 indoor dense enterprise WLANs. In a mono-channel condition, it helps tackle the co-channel interference problem and enrich the end-to-end experience of Wi-Fi clients. In this work, we present our dRRM solution, WLCx, and demonstrate its performance against related-work and vendor approaches. Our solution is built on a novel and realistic per-beam coverage representation approach. Unlike other RRM solutions, WLCx is dynamic: even the calculation system parameters are processed. This processing comes at a price in terms of processing time. To overcome this limitation, we constructed and implemented a NURBS surface-based optimization for our RRM solution. The NURBS-optimized WLCx solution, N-WLCx, achieves almost 92.58% time reduction in comparison with basic WLCx. Furthermore, our optimization could easily be extended to enhance other vendor and research RRM solutions.
In recent years, mobile devices such as smartphones and tablets have been empowered with tremendous technological advancements. Augmenting their computing capability with the distant cloud helps us envision a new computing era named mobile cloud computing (MCC). However, the distant cloud has several limitations, such as communication delay and bandwidth, which bring in the idea of a proximate cloud, the cloudlet. The cloudlet has distinct advantages and is free from several limitations of the distant cloud: it is a viable way to offload mobile device tasks to the nearest small-scale cloud. However, the limited resources of a cloudlet negatively impact its performance as the number of users increases; at some point this appears as a resource scarcity problem. In this paper, we analyse the impact of the cloudlet resource scarcity problem on overall cloudlet performance in mobile cloud computing. For the empirical analysis, we set out definitions, assumptions, and research boundaries, and we experimentally examine the impact of finite resources on overall cloudlet performance. Through this empirical analysis, we explicitly establish the research gap and present the cloudlet finite resource problem in mobile cloud computing. We then propose a Performance Enhancement Framework of Cloudlet (PEFC), which enhances the performance of a finite-resource cloudlet. Our aim is to increase cloudlet performance with these limited cloudlet resources and provide a better experience for the cloudlet user in mobile cloud computing.
A review on serverless architectures - function as a service (FaaS) in cloud ... (TELKOMNIKA Journal)
With the emergence of cloud computing as the inevitable IT computing paradigm, the perception of the compute reference model and the building of services have evolved into new dimensions. Serverless computing is an execution model in which the cloud service provider dynamically manages the allocation of the server's compute resources. The consumer is billed for the actual volume of resources consumed, instead of paying for pre-purchased units of compute capacity. This model evolved as a way to achieve optimum cost and minimum configuration overheads, and to increase an application's ability to scale in the cloud. The potential of the serverless compute model is well recognized by the major cloud service providers and is reflected in their adoption of the serverless computing paradigm. This review paper presents a comprehensive study of serverless computing architecture and also extends an experimental examination of the working principle of the serverless computing reference model adopted by AWS Lambda. The various research avenues in serverless computing are identified and presented.
A Grouped System Architecture for Smart Grids Based AMI Communications Over LTE (IJWMN)
A smart grid based Advanced Metering Infrastructure (AMI), is a technology that enables the utilities to
monitor and control the electricity consumption through a set of various smart meters (SMs) connected via
a two way communication infrastructure. One of the key challenges for smart grids is how to connect a
large number of devices. On the other hand, 4G Long Term Evolution (LTE), the latest standard for mobile
communications, was developed to provide stable service performance and higher data rates for a large
number of mobile users. Therefore, LTE is considered a promising solution for wide area connectivity for
SMs. In this paper, a grouped hierarchical architecture for SM communications over LTE is introduced.
Then, an efficient grouped scheduling technique is proposed for SMs transmissions over LTE. The
proposed architecture efficiently solves the overload problem due to AMI traffic and guarantees full
monitoring and control of energy consumption. The results of our suggested solution showed that LTE can
serve smart grid-based AMI better with a particular grouping and scheduling scheme. In addition, the
presented technique can be used in urban areas with a high density of SMs.
MCCVA: A NEW APPROACH USING SVM AND KMEANS FOR LOAD BALANCING ON CLOUDijccsa
Nowadays, the demand for resources and services accessed via intranet systems or the Internet is growing rapidly. The resulting problem is how to use these resources effectively in terms of time and quality. Network QoS and its economics are therefore common concerns, and cloud computing was born as an inevitable trend. However, managing resources and scheduling tasks in virtualized data centres on the cloud remain challenging tasks. Many load balancing algorithms for clouds have been proposed by authors, scholars, and experts. These existing methods are mostly natural or heuristic; the application of AI and modern data-mining technologies to load balancing is not yet widespread, due to the particular characteristics of the cloud. In this paper, we propose an algorithm to reduce processing time (makespan) on cloud computing, helping load balancing work more efficiently. We use the SVM algorithm to classify incoming requests and K-Means to cluster the VMs in the cloud; the load balancer then allocates the requests to the VMs in the most reasonable way, so that requests with the least processing time are allocated to the VMs with the lowest usage. We name this new proposal MCCVA (Makespan Classification & Clustering VM Algorithm). We have experimented with and evaluated this algorithm in CloudSim, a cloud simulation environment, and obtained better results than some other well-known algorithms. MCCVA shows the large potential of AI and data mining in load balancing, which can be developed further to achieve ever better QoS.
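As a rough illustration of the MCCVA pipeline just described (classify incoming requests by expected makespan, cluster VMs by usage, then send the lightest requests to the least-loaded cluster), here is a stdlib-only Python sketch. A simple threshold stands in for the SVM and a two-centroid 1-D k-means stands in for K-Means; all data, thresholds, and function names are hypothetical, not from the paper.

```python
def classify_request(size_mb, mips):
    """Toy stand-in for the SVM: label a request 'short' or 'long' by a threshold."""
    return "short" if size_mb * mips < 50 else "long"

def cluster_vms(vm_loads, iters=10):
    """Toy two-centroid 1-D k-means over VM load values in [0, 1]."""
    lo, hi = min(vm_loads), max(vm_loads)
    for _ in range(iters):
        low_group = [l for l in vm_loads if abs(l - lo) <= abs(l - hi)]
        high_group = [l for l in vm_loads if abs(l - lo) > abs(l - hi)]
        lo = sum(low_group) / len(low_group) if low_group else lo
        hi = sum(high_group) / len(high_group) if high_group else hi
    return lo, hi  # centroids of the low-load and high-load clusters

def allocate(size_mb, mips, vm_loads):
    """Send short requests to the low-load cluster, then pick its least-used VM."""
    lo_c, hi_c = cluster_vms(vm_loads)
    target = lo_c if classify_request(size_mb, mips) == "short" else hi_c
    other = hi_c if target == lo_c else lo_c
    pool = [i for i, l in enumerate(vm_loads) if abs(l - target) <= abs(l - other)]
    return min(pool, key=lambda i: vm_loads[i])  # lowest-usage VM in the cluster

loads = [0.9, 0.2, 0.8, 0.1, 0.85]
print(allocate(2, 10, loads))   # a light request lands on the least-loaded VM
```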
ENERGY EFFICIENT COMPUTING FOR SMART PHONES IN CLOUD ASSISTED ENVIRONMENTIJCNCJournal
In recent years, the use of smart mobile phones has increased enormously, and they have become an integral part of human life. Smartphones can support an immense range of complicated and intensive applications, but this results in shortened battery life and lower performance. Mobile cloud computing is a newly rising paradigm that integrates the features of cloud computing and mobile computing to overcome the constraints of mobile devices. Mobile cloud computing employs computational offloading, which migrates computations from mobile devices to remote servers. In this paper, a novel model is proposed for dynamic task offloading to attain energy optimization and better performance for mobile applications in the cloud environment. The paper proposes an optimal offloading algorithm that introduces new criteria, such as benchmarking, for offloading decision making. It also supports partitioning, which divides a computing problem into sub-problems that can be executed in parallel on the mobile device and the cloud. Performance evaluation results show that the proposed model can reduce energy consumption by around 20% to 53% for low-complexity problems and by up to 98% for high-complexity problems.
An Empirical study on Peer-to-Peer sharing of resources in Mobile Cloud Envi...IJECEIAES
The increased number of mobile users with Internet access and the interoperability among cloud services intensify the role of distributed environments in today's real-world applications. Modern technologies are important for building rich, scalable, and interoperable applications. To meet client requirements, the cloud service provider should offer adequate infrastructure, especially under heavy multi-client load. To provide a solution for large-scale requirements and to satisfy mobile clients in critical situations such as limited bandwidth, connectivity issues, and low service completion ratios, we present an ad hoc virtual cloud model for different scenarios, including single- and multiple-client configurations with various file sizes and formats, for retrieving files in the mobile cloud environment. We evaluate the strategies with socket and RMI implementations in Java and identify the best model for real-world applications. Performance evaluation of the results obtained indicates when sockets and RMI can be appropriately used in a peer-to-peer environment where the mobile user cannot connect directly to cloud services.
Techniques to Minimize State Transfer Cost for Dynamic Execution Offloading I...IJERA Editor
Recent advancements in cloud computing are leading to excessive growth in mobile devices, which can become powerful means for information access and mobile applications, introducing a latent technology called mobile cloud computing. Smartphones support a wide range of mobile applications that require high computational power, memory, storage, and energy, but these resources are limited in number and so act as constraints on smartphone devices. By integrating cloud computing with mobile applications, it is possible to overcome these constraints by offloading complex modules to the cloud. These restrictions may be alleviated by computation offloading: sending heavy computations to resourceful servers and receiving the results from these servers. Many issues related to offloading have been investigated in the past decade.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET...IJCNCJournal
Classic information processing has been replaced by cloud computing in many studies, as cloud computing becomes more popular and grows faster than other computing models. Cloud computing provides on-demand services for users. Reliability and energy consumption are two hot challenges and trade-off problems in the cloud computing environment that require careful attention and research. This paper proposes an Auto Resource Management (ARM) scheme to enhance reliability by reducing Service Level Agreement (SLA) violations and to reduce the energy consumed by cloud computing servers. In this context, the ARM scheme consists of three components: a static/dynamic threshold, a virtual machine selection policy, and a short-term resource utilization prediction method. The Minimum Utilization Non-Negative (MUN) virtual machine selection policy and the Rate of Change (RoC) dynamic threshold are presented in this paper, along with a method for choosing the static threshold value. To improve ARM performance, the paper proposes Short Prediction Resource Utilization (SPRU), which aims to improve decision making by including resource utilization at both the current and future times. The results show that SPRU enhanced the decision-making process for managing cloud computing resources and reduced energy consumption and SLA violations. The proposed scheme was tested with real workload data on the CloudSim simulator.
A NURBS-optimized dRRM solution in a mono-channel condition for IEEE 802.11 e...IJECEIAES
Dynamic Radio Resource Management (dRRM) is an essential design block in the functional architecture of any WiFi controller in IEEE 802.11 indoor dense enterprise WLANs. In a mono-channel condition, it helps tackle the co-channel interference problem and enrich the end-to-end WiFi client experience. In this work, we present our dRRM solution, WLCx, and demonstrate its performance against related-work and vendor approaches. Our solution is built on a novel and realistic per-beam coverage representation approach. Unlike other RRM solutions, WLCx is dynamic: even the calculation system parameters are processed. This processing comes at a price in terms of processing time. To overcome this limitation, we constructed and implemented a NURBS surface-based optimization for our RRM solution. Our NURBS-optimized WLCx (N-WLCx) solution achieves an almost 92.58% time reduction in comparison with basic WLCx. Furthermore, our optimization could easily be extended to enhance other vendor and research RRM solutions.
In recent years, mobile devices such as smartphones and tablets have been empowered with tremendous
technological advancements. Augmenting their computing capability with the distant cloud lets us
envision a new computing era called mobile cloud computing (MCC). However, the distant cloud has
several limitations, such as communication delay and bandwidth, which bring in the idea of a proximate
cloud, or cloudlet. The cloudlet has distinct advantages and is free from several limitations of the
distant cloud: it is a viable way to offload mobile device tasks to the nearest small-scale cloud.
However, the cloudlet's limited resources negatively impact its performance as the number of users
grows, and at some point this appears as a resource scarcity problem. In this paper, we analyse the
impact of the cloudlet resource scarcity problem on overall cloudlet performance in mobile cloud
computing. For the empirical analysis, we state definitions, assumptions, and research boundaries, and
we experimentally examine the impact of finite resources on overall cloudlet performance. Through this
empirical analysis, we explicitly establish the research gap and present the cloudlet finite-resource
problem in mobile cloud computing. We then propose a Performance Enhancement Framework of Cloudlet
(PEFC), which enhances the performance of the resource-limited cloudlet. Our aim is to increase
cloudlet performance with these limited resources and provide a better experience for the cloudlet
user in mobile cloud computing.
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug...IJECEIAES
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits that the system must meet. In this research paper, we focus on cloud server maintenance and scheduling, using an interactive broadcasting energy-efficient computing technique together with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and, with that, consume more and more energy. Power consumption is one of the main factors determining the cost of computing resources. We therefore use avoidance technology to assign data centre resources dynamically, depending on application demands, supporting cloud computing by optimizing the number of servers in use.
In this paper we study cloud computing, its types, and the need to use cloud computing. We also study the architecture of mobile cloud computing, and we introduce new techniques for backing up and restoring data from mobile devices to the cloud. We propose applying compression techniques while backing up and restoring data between the smartphone and the cloud.
Load Balancing in Cloud Computing Environment: A Comparative Study of Service...Eswar Publications
Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The
load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing have been identified and also a hybrid algorithm for developments in the future is suggested.
Contemporary Energy Optimization for Mobile and Cloud Environmentijceronline
Cloud and mobile computing applications are growing heavily in terms of usage, and these two areas extend the usability of systems. This review paper gives information about cloud and mobile applications in terms of the resources they consume, the need for users in several locations to choose among a variety of features, and the evolutionary provisions for service providers and end users. The two fields are combined to provide good functionality, efficiency, and effectiveness with mobile phones, with enhancements that consider power consumption arising from the resource-constrained nature of devices, the communication media, and cost effectiveness. This paper discusses concepts related to power consumption, the underlying protocols, and other performance issues.
A Comparison of Cloud Execution Mechanisms Fog, Edge, and Clone Cloud Computing IJECEIAES
Cloud computing is a technology that was developed a decade ago to provide uninterrupted, scalable services to users and organizations. Cloud computing has also become an attractive feature for mobile users due to the limited features of mobile devices. The combination of cloud technologies with mobile technologies resulted in a new area of computing called mobile cloud computing. This combined technology is used to augment the resources existing in Smart devices. In recent times, Fog computing, Edge computing, and Clone Cloud computing techniques have become the latest trends after mobile cloud computing, which have all been developed to address the limitations in cloud computing. This paper reviews these recent technologies in detail and provides a comparative study of them. It also addresses the differences in these technologies and how each of them is effective for organizations and developers.
Implementing K-Out-Of-N Computing For Fault Tolerant Processing In Mobile and...IJERA Editor
Despite the advances in hardware for hand-held mobile devices, resource-intensive applications (e.g., video and image storage and processing, or map-reduce type) still remain off bounds since they require large computation and storage capabilities. Recent research has attempted to address these issues by employing remote servers, such as clouds and peer mobile devices. For mobile devices deployed in dynamic networks (i.e., with frequent topology changes because of node failure/unavailability and mobility, as in a mobile cloud), however, challenges of reliability and energy efficiency remain largely unaddressed. To the best of our knowledge, we are the first to address these challenges in an integrated manner for both data storage and processing in the mobile cloud, an approach we call k-out-of-n computing. In our solution, mobile devices successfully retrieve or process data, in the most energy-efficient way, as long as k out of n remote servers are accessible. Through a real system implementation we prove the feasibility of our approach. Extensive simulations demonstrate the fault tolerance and energy efficiency performance of our framework in larger-scale networks.
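The k-out-of-n retrieval rule described above can be sketched in a few lines: a read succeeds as long as at least k of the n remote servers are reachable, and the client picks the k cheapest reachable servers by energy cost. The server names and costs below are hypothetical, and the selection heuristic is an illustrative assumption rather than the paper's actual algorithm.

```python
def plan_read(servers, k):
    """servers: {name: (reachable, energy_cost)} -> k cheapest reachable, or None."""
    up = sorted((cost, name) for name, (ok, cost) in servers.items() if ok)
    if len(up) < k:
        return None                      # fewer than k fragments reachable: fail
    return [name for _, name in up[:k]]  # most energy-efficient feasible subset

servers = {"s1": (True, 3.0), "s2": (False, 1.0), "s3": (True, 2.0),
           "s4": (True, 5.0), "s5": (True, 2.5)}
print(plan_read(servers, 3))   # the read succeeds: 4 of 5 servers are up
```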
Secured Communication Model for Mobile Cloud Computingijceronline
A detailed study of cloud computing is presented. Starting from its basics, the characteristics and different
modalities are dwelt upon. The pros and cons of cloud computing are also highlighted, and its service models
are lucidly described.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes a great deal of work. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Mission to Decommission: Importance of Decommissioning Products to Increase E...
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661,p-ISSN: 2278-8727, Volume 17, Issue 2, Ver. II (Mar – Apr. 2015), PP 18-21
www.iosrjournals.org
DOI: 10.9790/0661-17221821 www.iosrjournals.org 18 | Page
Bandwidth Management on Cloud Computing Network
Eng. Randa Ibrahim Mohammed Ibnouf, Dr. Amin Babiker A/Nabi Mustafa
Faculty of Postgraduate Studies – Telecommunication Engineering – Al-Neelain University – Khartoum – Sudan
Abstract: Managing the available bandwidth and distributing it among cloud application users
effectively is a critical issue in avoiding network congestion and abuse of network resources. In this paper we
explore a mechanism that enables us to distribute the bandwidth more effectively, and in a smart way, for the
cloud service of the CBS company. The cloud application users were divided into three distinct groups by
network consumption capacity; each capacity was determined from the actual work done, and the bandwidth was
assigned accordingly. We monitored the network performance for all three groups to verify the quality of the
network service. The results showed that our mechanism succeeded in managing the bandwidth in an ideal way
that granted us maximum usage of the cloud service.
Keywords: Bandwidth, Cloud computing, Response time, Network traffic, Monitoring
I. Introduction:
Nowadays, cloud computing is widely spread among businesses, and most companies are interested
in taking advantage of it to minimize the capital investment and ongoing running expenses of their
automation. Cloud computing has brought a wide spectrum of applications within the financial reach of
almost any company, irrespective of its size. The problem with the cloud is access to it through a network
connection, because the cloud always resides in remote data centres. Getting enough bandwidth to access
the cloud while avoiding network congestion and slowness can be very expensive. If the cloud application
is optimized for use on the cloud, then the application itself will not be an obstacle to the high number of
users connecting to the cloud. [2]
In this paper, we divided the limited bandwidth of the network accessing the cloud among the
different user groups according to each group's traffic needs for its routine daily work.
We carried out three simulations on three networks with different average traffic capacities, on a
specific database application. We divided the users into three groups in accordance with their actual data
needs and assigned the relevant bandwidth to each of them. We monitored and analyzed the network
performance by measuring its metrics: throughput, response time, and utilization. We then compared these
metrics to verify that our criteria had rendered the expected equitable division of the bandwidth among the
different simulated networks.
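The comparison described above can be sketched numerically. The sketch below uses the three link capacities from the paper's grouping (1000BaseT, 100BaseT, and 10BaseT for the administrator, medium, and normal groups) but assumes hypothetical offered loads and a textbook M/M/1 queue as a stand-in for the OPNET measurements, so the numbers are illustrative only.

```python
GROUPS = {                       # capacities follow the paper's group assignment
    "admin":  {"capacity_bps": 1_000_000_000, "offered_bps": 600_000_000},
    "medium": {"capacity_bps":   100_000_000, "offered_bps":  70_000_000},
    "normal": {"capacity_bps":    10_000_000, "offered_bps":   6_000_000},
}

def metrics(capacity_bps, offered_bps, packet_bits=12_000):
    """Throughput, utilization, and M/M/1 mean response time for one group."""
    rho = offered_bps / capacity_bps            # link utilization
    mu = capacity_bps / packet_bits             # service rate (packets/s)
    lam = offered_bps / packet_bits             # arrival rate (packets/s)
    resp = 1.0 / (mu - lam) if lam < mu else float("inf")
    return {"throughput_bps": min(offered_bps, capacity_bps),
            "utilization": rho,
            "response_s": resp}

for name, g in GROUPS.items():
    m = metrics(**g)
    print(f"{name}: util={m['utilization']:.0%} resp={m['response_s'] * 1e6:.0f} us")
```

With these assumed loads, every group stays below saturation (utilization under 100%), which is the condition the paper's equitable division is meant to guarantee.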
Cloud computing is defined as a model that enables ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or service provider
interaction. This cloud model promotes availability and is composed of five essential characteristics, three
service models, and four deployment models, as shown in the figure below. [1] [3]
Figure 1: Cloud models, service models, and characteristics
To optimize bandwidth for users:
In this case we use OPNET simulation software to measure network performance – throughput,
response time, and utilization – and to discover potential bandwidth bottlenecks before permanently
putting applications and data into the cloud. It is no longer just about delivering a great application; it is
about whether that application can survive in the wild. You have to examine the maximum use of the
cloud-based application and network. [2]
Problem statement:
CBS is a company that wants to save bandwidth costs and enhance its application resources by
adopting cloud computing services. CBS has multiple users, and we need to determine how to manage
bandwidth for those users by applying optimization techniques and monitoring the network.
II. Case study:
CBS Company
If your employees and your users cannot access data fast enough, then the cloud will be nothing more than a
pipe dream. In the CBS case, that meant re-architecting the network to distribute databases so that data is
quickly reachable and the data centers remain synchronized.
The Proposed Solution:
The administrator can make the network more efficient by applying techniques such as:
Applying sound policies to distribute bandwidth among users
Monitoring the network for traffic
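The paper does not specify how a bandwidth-distribution policy would be enforced; one common mechanism is a token-bucket shaper, sketched below under that assumption (the class name and parameters are illustrative, not from the paper).

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: admits traffic up to rate_bps,
    with bursts bounded by burst_bytes."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum burst size in bytes
        self.tokens = burst_bytes       # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Return True if a packet of this size may be sent now."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# A 10 Mbps policy for a normal user, allowing 64 KB bursts.
user_policy = TokenBucket(rate_bps=10_000_000, burst_bytes=64 * 1024)
print(user_policy.allow(1500))  # a 1500-byte frame fits within the burst
```

In practice such shaping would be configured on the network devices themselves; the sketch only illustrates the admission logic.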
The adoption of cloud-based computing and applications promises to improve the agility, efficiency,
and cost effectiveness of IT operations required to provision, scale, and deliver applications to the enterprise.
However, as with other new technology trends, delivering applications from the cloud to remote sites creates
additional challenges in application performance, availability, and security. [4]
III. Simulation analysis:
In a cloud, bandwidth sharing among the huge number of users is a critical factor for the successful
deployment of any application on the cloud. To maximize the usage of the limited bandwidth available to the
cloud, we propose dividing the bandwidth equitably among the different users according to the volume of data
they pass through the network to and from the cloud. In this context, we divided the cloud users into three
categories according to their priority on the cloud:
1. Administrators
2. Users with additional tasks to do
3. Normal users who do routine work
Accordingly, we simulated the network to distribute the bandwidth among these user categories,
assigning different capacities to each. We assigned 1000BaseT to Category 1 because of the importance of
their work and the high priority they need to manage the whole cloud and the activity running on it. We
assigned Category 2 100BaseT because of their continuous work throughout the day, sending reports and
returns to their managers. We assigned Category 3 10BaseT, which is sufficient for their routine duties that do
not require high data capacity. In this way, we managed the available bandwidth and distributed it intelligently
among the different categories of users according to their actual monitored demand, so as to maximize the
throughput of every user on the cloud.
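The allocation policy described above reduces to a fixed mapping from user category to Ethernet tier. A minimal sketch (category names are illustrative):

```python
# Sketch of the category-to-bandwidth mapping described above.
# Each tier corresponds to a standard Ethernet rate in bits per second.

CATEGORY_BANDWIDTH = {
    "administrator": 1_000_000_000,  # 1000BaseT: 1 Gbps
    "heavy_user":      100_000_000,  # 100BaseT: 100 Mbps
    "normal_user":      10_000_000,  # 10BaseT: 10 Mbps
}

def assign_bandwidth(category):
    """Return the link rate (bps) for a user category, defaulting to
    the lowest tier for unknown categories."""
    return CATEGORY_BANDWIDTH.get(category, 10_000_000)

print(assign_bandwidth("administrator"))  # 1000000000
```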
IV. Results:
CBS is a company that needs to make use of cloud computing applications, but it is concerned
about network bandwidth and its cost, given the different classes of users whose data usage differs, as well as
the need for a responsive application. The company is therefore looking for a network management solution
that allocates the right bandwidth to each user according to his upload and download demand, while keeping
the application equitably responsive without noticeable delay.
The experimental setup for the network, as configured by our solution, satisfied all of these needs and gave the
relevant network bandwidth to each user group.
Our findings are as follows:
The throughput was lower on the 1000BaseT network, which has a large capacity, and slightly higher on
the 100BaseT network. The 10BaseT network showed the highest bandwidth usage relative to its capacity.
This implies that every user consumes the bandwidth dictated by the nature of his work.
The response time was uniform and stable for the 1000BaseT network, granting those users fast access to
the service whenever they want, whereas the response was less stable for the 100BaseT network and worse
still for the 10BaseT network.
It is noticeable from the experimental outcomes that we utilized the limited bandwidth very effectively,
especially for the 1000BaseT users.
In conclusion, we managed to use the limited bandwidth effectively, dividing it among the users in a way that
preserves the network resources and utilizes them on demand.
Outputs:
Figure 2: Response Time
Figure 3: Utilization
Figure 4: Throughput
V. Conclusion:
From the results we gathered, by distributing the bandwidth according to user priorities and the actual
demand for data transfer to and from the cloud, we maximized the utilization of the available bandwidth
intelligently and efficiently. The output demonstrates good bandwidth management on the network, with users
consuming only the capacity they need from the network's resources.
References:
[1]. http://www.tomsitpro.com/cloud_bandwdith
[2]. http://sdu.ictp.it/lowbandwidth/
[3]. http://searchenterprisewan.techtarget.com
[4]. http://www.ithound.com