This document summarizes a seminar presentation on task scheduling approaches in fog computing. It reviews 15 papers on the topic, categorizing each by proposed method, evaluation parameters, and simulator used. Key findings include a taxonomy of optimization problems in fog computing, a classification of scheduling methods and algorithms, and the identification of common evaluation metrics (such as latency and energy consumption) and simulators (such as iFogSim). Noted research gaps include further analysis of parameters like SLA penalty and task priority.
IRJET- An Energy-Saving Task Scheduling Strategy based on Vacation Queuing & ... (IRJET Journal)
This document summarizes a research paper that proposes an energy-saving task scheduling strategy for cloud computing based on vacation queuing and optimization of resources. The proposed approach aims to minimize energy consumption, reduce processing time, and increase the number of sleeping nodes to make the system more efficient. It introduces a task scheduling algorithm that assigns tasks to computing nodes based on their properties using a load balancer. Simulation results show the proposed algorithm reduces energy consumption while meeting task performance compared to the vacation queuing algorithm. The document discusses related work on energy optimization techniques, presents the proposed approach, and analyzes results showing improvements in energy usage, time, and idle nodes.
The document discusses a methodology for evaluating the impact of databases and cloud patterns on the energy efficiency of cloud applications. It aims to measure the energy consumption and response time of applications implemented with MySQL, PostgreSQL, and MongoDB databases, both individually and combined with cloud patterns like Local Database Proxy, Local Sharding-Based Router, and Priority Message Queue. The methodology defines research questions, hypotheses, independent variables like choice of database/pattern, dependent variables like energy usage and response time. It also describes the data extraction process and analysis methods that will be used. The overall goal is to understand how databases and patterns affect energy efficiency and response time to help developers make informed design choices.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNCJournal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands sophisticated algorithms to solve the task-allotment problem, and the overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms. The dynamic nature of cloud systems makes it challenging to find an optimal solution that satisfies all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, while Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
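A minimal sketch of how such a Round Robin/Shortest Job First hybrid could be combined (the function name, quantum value, and exact combination rule are illustrative assumptions, not the paper's specification): tasks are first ordered by burst time, then served round-robin with a fixed quantum so long jobs cannot starve short ones.

```python
from collections import deque

def hybrid_schedule(burst_times, quantum=4):
    """Hypothetical RR+SJF hybrid: order tasks by burst time (SJF),
    then serve them round-robin with a fixed quantum to limit starvation.
    Returns (completion order of task ids, makespan)."""
    # Sort task ids by burst time (Shortest Job First ordering).
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    remaining = {i: burst_times[i] for i in order}
    queue = deque(order)
    clock, finished = 0, []
    while queue:
        tid = queue.popleft()
        run = min(quantum, remaining[tid])
        clock += run
        remaining[tid] -= run
        if remaining[tid] == 0:
            finished.append(tid)   # task completes at time `clock`
        else:
            queue.append(tid)      # re-queue, Round Robin style
    return finished, clock

order, makespan = hybrid_schedule([6, 2, 8], quantum=4)  # → ([1, 0, 2], 16)
```

The shortest task (id 1) finishes first, while the quantum guarantees the longest task still makes regular progress.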
The ACCESS-Optimization Project is a 3-year effort between NCI, BoM and Fujitsu to optimize and scale up climate and earth system models run on NCI infrastructure. The project aims to address performance and scalability issues, assist with future HPC procurements, and contribute to model development with a focus on performance. Current work involves profiling applications, constructing and testing higher resolution configurations, and reporting on workflow and scalability issues for future weather and climate applications. Methodologies used include tools for performance analysis and scaling tests. Areas of work include high resolution models of the ocean, atmosphere and coupled climate system, as well as data assimilation procedures. Deliverables to date include porting the ocean model to the new
This document presents a scalable approach to quantify availability in large-scale Infrastructure as a Service (IaaS) clouds. It models component failures using three pools - hot, warm, and cold. Dependencies between pools are resolved using fixed-point iteration. It compares analytic-numeric solutions from the proposed interacting Markov chain approach to monolithic models. The document also discusses optimizing data replication in clouds to minimize violations of applications' quality of service requirements. It formulates the problem as an integer program and proposes transforming it to a minimum-cost maximum-flow problem to find optimal solutions efficiently.
This document summarizes a project that aims to improve electric vehicle battery efficiency and longevity through accurate prediction of future power demands. It uses a k-nearest neighbor approach trained on driver history to predict power needs. This allows an optimal controller to intelligently manage a heterogeneous power store consisting of batteries and supercapacitors. The results showed 54.4% reduction in current squared compared to 40.2% for static policies, demonstrating that leveraging rich driver history data can enhance performance over time as more data is collected.
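The k-nearest-neighbour idea above can be sketched in a few lines (the feature choice of speed/acceleration and the function name are assumptions for illustration; the project's actual feature set is not specified here): each history entry pairs a driving-situation feature vector with the power demand that followed, and a query averages the k closest past situations.

```python
import math

def knn_predict_power(history, query, k=3):
    """Hypothetical k-NN sketch: predict upcoming power demand from
    driver history. `history` is a list of (features, observed_power)
    pairs; the prediction averages the k nearest past situations."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(history, key=lambda fp: dist(fp[0], query))[:k]
    return sum(p for _, p in nearest) / len(nearest)

# Assumed features: (speed km/h, acceleration m/s^2) -> power demand (kW).
history = [((30, 0.1), 5.0), ((31, 0.2), 5.5),
           ((60, 1.0), 20.0), ((58, 0.9), 19.0)]
knn_predict_power(history, (59, 0.95), k=2)  # → 19.5 (two highway-like entries)
```

As more history accumulates, the neighbourhood becomes denser and predictions sharpen, which matches the paper's observation that performance improves over time.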
HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in... (TELKOMNIKA JOURNAL)
Mobile Cloud Computing (MCC) is an emerging technology for improving mobile service quality. MCC resources are dynamically allocated to users, who pay for them based on their needs. The drawback of this process is that it is prone to failure and demands high energy input. Resource providers mainly focus on resource performance and utilization, with particular consideration of service level agreement (SLA) constraints. Resource performance can be achieved through virtualization techniques, which facilitate the sharing of resource providers' information between different virtual machines. To address these issues, this study sets forth a novel hybrid swarm optimization algorithm (HSO) for energy-efficient resource management in the cloud; the proposed method uses a cost- and runtime-effective model to create a minimum-energy configuration of the cloud compute nodes while guaranteeing that all minimum performance requirements are maintained. The cost functions cover energy, performance, and reliability concerns. With the proposed model, the performance of the hybrid swarm algorithm was significantly increased, as observed by optimizing the number of tasks in simulation: power consumption was reduced by 42%. The simulation studies also showed a reduction of about 20% in the number of required calculations compared to the traditional static approach. There was also a decrease in node loss, which allowed the optimization algorithm to achieve minimal overhead on cloud compute resources while still saving energy significantly. In conclusion, the study presents an energy-aware optimization model describing the required system constraints, together with proposed techniques for determining the best overall solution.
Hybrid Task Scheduling Approach using Gravitational and ACO Search Algorithm (IRJET Journal)
The document proposes a hybrid task scheduling approach for cloud computing called ACGSA that combines ant colony optimization and gravitational search algorithms. It describes using the Cloudsim simulator to test the performance of ACGSA and comparing it to ant colony optimization. The results show that ACGSA achieves better performance than the basic ant colony approach on relevant parameters like task scheduling time and resource utilization.
This document discusses valuing demand response programs at Consolidated Edison Co. of New York (ConEd). It outlines ConEd's existing demand response programs and describes the methodology used to value these programs, including using avoided costs, accounting for risk and timing factors. It also discusses a marginal cost study conducted to inform the valuation and outlines future regulatory developments around energy efficiency and distributed energy resources that could impact demand response valuation.
An enhanced adaptive scoring job scheduling algorithm with replication strate... (eSAT Publishing House)
This document describes an enhanced adaptive scoring job scheduling algorithm with replication strategy for grid environments. The algorithm aims to improve upon an existing adaptive scoring job scheduling algorithm by identifying whether jobs are data-intensive or computation-intensive. It then divides large jobs into subtasks, replicates the subtasks, and allocates the replicas to clusters based on a computed cluster score in order to improve resource utilization and job completion times. The algorithm is evaluated through simulation using the GridSim toolkit.
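A sketch of the cluster-score idea described above (the weights, field names, and round-robin replica assignment are assumptions; the paper's actual scoring formula is not reproduced here): data-intensive jobs weight network bandwidth more heavily, computation-intensive jobs weight CPU speed, and replicated subtasks go to the highest-scoring clusters.

```python
def cluster_score(cpu_speed, bandwidth, data_intensive):
    """Hypothetical score: favour bandwidth for data-intensive jobs and
    CPU speed for computation-intensive ones (weights are assumptions)."""
    w_cpu, w_bw = (0.3, 0.7) if data_intensive else (0.7, 0.3)
    return w_cpu * cpu_speed + w_bw * bandwidth

def allocate_replicas(subtasks, clusters, data_intensive):
    """Assign replicated subtasks to clusters in descending score order."""
    ranked = sorted(clusters,
                    key=lambda c: cluster_score(c["cpu"], c["bw"], data_intensive),
                    reverse=True)
    return {t: ranked[i % len(ranked)]["name"] for i, t in enumerate(subtasks)}

clusters = [{"name": "A", "cpu": 10, "bw": 1},
            {"name": "B", "cpu": 2, "bw": 8}]
allocate_replicas(["t1", "t2"], clusters, data_intensive=True)
# → {"t1": "B", "t2": "A"}: the high-bandwidth cluster wins for data-heavy work
```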
The document discusses resource allocation and scheduling (RAS) in cloud computing. It identifies five major topics in cloud RAS: locality-aware task scheduling, reliability-aware scheduling, energy-aware RAS, Software as a Service (SaaS) layer RAS, and workflow scheduling. These topics are classified into three parts: performance-based RAS, cost-based RAS, and performance- and cost-based RAS. Existing RAS policies and algorithms are discussed for each topic in terms of parameters like execution efficiency, cost-effectiveness, reliability, and resource utilization.
This document discusses emerging models of ad hoc networks, including sensor networks, mobile ad hoc networks (MANETs), vehicular ad hoc networks (VANETs), wireless mesh networks (WMNs), cognitive radio ad hoc networks (CRAHNs), ad hoc grids, sensor grids, vehicular grids, ad hoc clouds, and vehicular clouds. It outlines key properties and research issues for each type of network, such as resource discovery, scheduling, security, power efficiency, and interoperability.
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in Cloud (IRJET Journal)
This document describes using a meta-heuristic optimization algorithm called the Bat Algorithm (BA) to schedule workflows in cloud computing environments. The BA is applied to optimize a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better by finding schedules that have lower execution times and higher reliability within the given budget constraints. The BA is well-suited for this problem because it can efficiently search large solution spaces and automatically focus on optimal regions like other metaheuristics.
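For orientation, here is a minimal single-objective Bat Algorithm sketch on a toy cost function; the paper's actual model is multi-objective (time, reliability) with a budget constraint, and the parameters below are assumptions. It shows the two core BA moves: a frequency-scaled pull toward the best-known solution, and an occasional local random walk around it, with greedy acceptance.

```python
import random

def bat_algorithm(objective, dim, bounds, n_bats=15, iters=100, seed=1):
    """Minimal Bat Algorithm sketch (common variant that pulls bats
    toward the best-known solution; greedy acceptance of improvements)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=objective)[:]
    for _ in range(iters):
        for i in range(n_bats):
            freq = rng.random()                       # random pulse frequency
            for d in range(dim):
                vel[i][d] += (best[d] - pos[i][d]) * freq
            cand = [min(hi, max(lo, p + v)) for p, v in zip(pos[i], vel[i])]
            if rng.random() < 0.5:                    # local walk near the best bat
                cand = [min(hi, max(lo, b + 0.01 * rng.gauss(0, 1))) for b in best]
            if objective(cand) <= objective(pos[i]):  # greedy acceptance
                pos[i] = cand
                if objective(cand) < objective(best):
                    best = cand[:]
    return best, objective(best)

# Toy stand-in for a schedule's weighted execution-time cost.
best, cost = bat_algorithm(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

In the workflow setting, `objective` would instead score a candidate task-to-VM mapping by execution time and reliability, rejecting candidates over budget.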
Energy Efficient Technologies for Virtualized Cloud Data Center: A Systematic... (IRJET Journal)
This document summarizes a systematic mapping study and literature review of 74 peer-reviewed articles on energy-efficient technologies for virtualized cloud data centers. The study evaluates approaches that optimize power consumption in virtualized data centers. A characterization framework was proposed to classify the studies by generic attributes, contribution type and evaluation method, technological attributes, and quality management. The results show that virtualization, consolidation, and workload scheduling are widely used techniques. Around 60% of studies contributed solutions validated through experiments or theoretical models. Dynamic voltage and frequency scaling (DVFS)-enabled scheduling and dynamic server consolidation were identified as important energy-saving methods. The study also identified a need for standardized benchmarking to help research progress and bridge industry-academia gaps.
This document summarizes a study conducted by Black & Veatch for SMUD to assess the impacts of distributed energy resources (DER) such as solar PV, energy efficiency, electric vehicles, and demand response on SMUD's distribution system. The study used several modeling techniques including dispersion analysis, power flow modeling, and regression analysis. Key findings included that over 12,000 transformers may need to be upgraded due to electric vehicles, and 26% of substations showed voltage violations from solar PV. The study recommends establishing consistent DER adoption and transformer upgrade thresholds, extending the analysis to the full transmission and distribution system, and incorporating results into SMUD's grid modernization plan.
IRJET- A Statistical Approach Towards Energy Saving in Cloud Computing (IRJET Journal)
This document proposes a statistical approach to save energy in cloud computing through predictive monitoring and optimization techniques. It discusses using Gaussian process regression to predict infrastructure workload and then applying convex optimization to determine the optimal subset of physical machines needed. Virtual machines would be migrated to this subset and idle physical machines could then be powered off to reduce energy consumption while maintaining system performance. An evaluation using 29 days of Google trace data showed the potential for significant power savings without affecting quality of service.
Sida LEAP Training Lecture #3 and #4: Energy Supply and Emissions Modeling (weADAPT)
Eight lectures were delivered in 2021 as a series of webinars organized by SEI, with support from the Swedish International Development Cooperation Agency (Sida), and presented by Jason Veysey and Charlotte Wagner of SEI.
This presentation covers lectures #3 and #4: Energy Supply and Emissions Modeling.
Find out more about this course here: https://www.weadapt.org/knowledge-base/synergies-between-adaptation-and-mitigation/introductory-low-emissions-analysis-platform-leap-training-course-2021
Cognitive Technique for Software Defined Optical Network (SDON) (CPqD)
This document discusses cognitive techniques for software defined optical networks (SDONs). It proposes using a fuzzy C-means (FCM) cognitive algorithm to determine modulation formats for high-speed transponders based on quality of transmission requirements. The FCM algorithm is compared to a case-based reasoning approach. Simulation results show the FCM approach has over two orders of magnitude faster computation time while achieving 100% accurate classification. This demonstrates FCM is a promising cognitive technique for SDON control planes to enable fast, autonomous decision making.
Improving Resource Utilization in Cloud using Application Placement Heuristics (AtakanAral)
Application placement is an important concept when providing software as a service in cloud environments. Because of the potential downtime cost of application migration, additional resource acquisition is usually preferred over migrating the applications residing in virtual machines (VMs). This results in under-utilized resources. To overcome this problem, static or dynamic estimations of the resource requirements of VMs and/or applications can be performed.
A simpler strategy is to use heuristics during the application placement process instead of naively applying greedy strategies like round-robin. In this paper, we propose a number of novel heuristics and compare them with the round-robin placement strategy and several placement heuristics from the literature to explore the performance of heuristics on the application placement problem. Our focus is to better utilize the resources offered by the cloud environment while minimizing the number of application migrations. Our results indicate that a heuristic relying on the difference between the maximum and minimum utilization rates of the resources not only outperforms the other application placement approaches but also significantly improves on conventional approaches in the literature.
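The max-minus-min utilization idea can be sketched as follows (a minimal illustration, assuming two normalized resource dimensions such as CPU and memory; function names and the capacity limit of 1.0 are assumptions): an app is placed on the machine whose utilization would be most balanced after adding the app's demand.

```python
def utilization_spread(machine):
    """Difference between the most- and least-utilised resource
    dimensions (e.g. CPU vs. memory) of one machine."""
    return max(machine) - min(machine)

def place(app_demand, machines):
    """Hypothetical sketch: choose the machine with the smallest max-min
    spread after adding the demand, skipping overloaded candidates.
    Returns the machine index, or None if nothing fits."""
    best_idx, best_spread = None, None
    for idx, m in enumerate(machines):
        after = [u + d for u, d in zip(m, app_demand)]
        if any(u > 1.0 for u in after):
            continue                       # would overload this machine
        spread = utilization_spread(after)
        if best_spread is None or spread < best_spread:
            best_idx, best_spread = idx, spread
    return best_idx

machines = [[0.8, 0.2], [0.5, 0.5]]        # (cpu, memory) utilisation
place((0.1, 0.1), machines)                # → 1: the balanced machine stays balanced
```

Balancing the dimensions this way tends to leave usable headroom in every resource, which is one plausible reason such a heuristic reduces both waste and forced migrations.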
faisal mushtaq - an enterprise cloud cost management framework (Dariia Seimova)
The document discusses cost management challenges in the cloud for enterprises and provides recommendations to address them. It outlines a cost management framework including initial planning, operational visibility and forecasting, governance, automation and optimization. It also discusses automation tools that can help with cost management tasks like visibility, optimization and billing. Finally, it presents a case study of a retail customer migrating to Google Cloud Platform where cost management was not initially planned and provides recommendations to help address their challenges.
This document presents an experiment on the performance and exergy analysis of a solar parabolic dish concentrator system installed at Universal Medicap Limited in Vadodara, India. The objectives are to validate a previous performance analysis methodology, develop an exergy analysis methodology, and conduct experiments. Work completed includes developing an exergy analysis model and Excel sheet, collecting experimental data, and publishing a paper on the performance analysis methodology. Current work involves analyzing collected data, calculating theoretical and actual efficiencies, and minimizing differences to optimize performance.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID (ijgca)
Grid computing is an accumulation of heterogeneous, dynamic, geographically distributed resources from multiple administrative domains that can be utilized to reach a common goal. Developing resource provisioning-based scheduling in large-scale distributed environments such as grids introduces requirement challenges not encountered in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it, and satisfying users while provisioning resources can in turn increase the benefit to resource suppliers. Resource scheduling has to satisfy multiple user-specified constraints, and selecting a resource that satisfies all of them is a tedious process. This problem is addressed by a particle swarm optimization (PSO)-based heuristic scheduling algorithm that attempts to select the most suitable resource from the set of available resources. The primary parameters considered in this work for selecting the most suitable resource are makespan and cost. Experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
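A weighted makespan-plus-cost objective of the kind such a scheduler would minimise can be sketched as below. Note this is a greedy baseline over a hypothetical objective, not the paper's PSO itself; the weights, resource tuple layout, and function names are all assumptions for illustration. A PSO would search the same objective over full task-to-resource assignments rather than picking per task.

```python
def fitness(makespan, cost, w_time=0.6, w_cost=0.4):
    """Hypothetical weighted objective over the two primary parameters
    named above (makespan and cost); weights are assumptions."""
    return w_time * makespan + w_cost * cost

def best_resource(task_mi, resources):
    """Greedy baseline: pick the resource minimising the weighted
    objective. Each resource is (name, mips, price_per_second)."""
    def score(r):
        name, mips, price = r
        t = task_mi / mips          # estimated completion time for this task
        return fitness(t, t * price)
    return min(resources, key=score)[0]

best_resource(1000, [("slow-cheap", 100, 0.1), ("fast-costly", 500, 1.0)])
# → "fast-costly": its time saving outweighs its price at these weights
```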
Reliable and efficient webserver management for task scheduling in edge-cloud... (IJECEIAES)
Developing cloud webserver management that executes workflows while meeting quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task. Although a body of work exists on scheduling workflows in heterogeneous cloud environments, rapid developments in cloud computing such as edge-cloud computing create new methods to schedule workflows in heterogeneous environments for tasks such as IoT, event-driven applications, and other network applications. Current workflow scheduling methods have failed to provide good trade-offs between reliable performance and minimal delay. In this paper, a novel webserver resource management framework, the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments conducted on complex bioinformatics workflows show that the proposed REWM significantly reduces cost and energy in comparison with a standard webserver management methodology.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
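The two gang scheduling policies named above differ only in queue ordering, which a short sketch makes concrete (a simplified single-pass model under the assumption that each gang needs all its processors at once; job tuples and CPU counts are illustrative):

```python
def afcfs(queue, free_cpus):
    """Adaptive First Come First Serve: scan gangs in arrival order and
    start any gang whose full processor demand fits the free capacity."""
    started = []
    for job, need in queue:
        if need <= free_cpus:
            started.append(job)
            free_cpus -= need
    return started

def ljfs(queue, free_cpus):
    """Largest Job First Serve: same fit test, but widest gangs first."""
    started = []
    for job, need in sorted(queue, key=lambda jn: -jn[1]):
        if need <= free_cpus:
            started.append(job)
            free_cpus -= need
    return started

queue = [("j1", 2), ("j2", 5), ("j3", 3)]   # (job, processors required)
afcfs(queue, 6)   # → ["j1", "j3"]: j2 no longer fits after j1 starts
ljfs(queue, 6)    # → ["j2"]: the widest gang goes first, leaving 1 CPU free
```

The example shows the trade-off the performance/cost evaluation weighs: AFCFS packs in more small gangs, while LJFS avoids starving large ones.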
artificial intelligence and data science contents.pptx (GauravCar)
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
Energy Efficient Technologies for Virtualized Cloud Data Center: A Systematic...IRJET Journal
This document summarizes a systematic mapping study and literature review of 74 peer-reviewed articles on energy efficient technologies for virtualized cloud data centers. The study aims to evaluate approaches that optimize power consumption in virtualized data centers. A characterization framework was proposed to classify the studies based on generic attributes, contribution type and evaluation method, technological attributes, and quality management. The results showed that virtualization, consolidation, and workload scheduling are widely used techniques. Around 60% of studies contributed solutions and validation methods through experiments or theoretical models. Dynamic voltage and frequency scaling-enabled scheduling and dynamic server consolidation were identified as important methods for saving energy. The study also identified a need for standardized benchmarking to help research progress and bridge industry-academia gaps
This document summarizes a study conducted by Black & Veatch for SMUD to assess the impacts of distributed energy resources (DER) such as solar PV, energy efficiency, electric vehicles, and demand response on SMUD's distribution system. The study used several modeling techniques including dispersion analysis, power flow modeling, and regression analysis. Key findings included that over 12,000 transformers may need to be upgraded due to electric vehicles, and 26% of substations showed voltage violations from solar PV. The study recommends establishing consistent DER adoption and transformer upgrade thresholds, extending the analysis to the full transmission and distribution system, and incorporating results into SMUD's grid modernization plan.
IRJET- A Statistical Approach Towards Energy Saving in Cloud ComputingIRJET Journal
This document proposes a statistical approach to save energy in cloud computing through predictive monitoring and optimization techniques. It discusses using Gaussian process regression to predict infrastructure workload and then applying convex optimization to determine the optimal subset of physical machines needed. Virtual machines would be migrated to this subset and idle physical machines could then be powered off to reduce energy consumption while maintaining system performance. An evaluation using 29 days of Google trace data showed the potential for significant power savings without affecting quality of service.
Sida LEAP Training Lecture #3 and #4: Energy Supply and Emissions ModelingweADAPT
Eight lectures were delivered in 2021 as a series of webinars organized by SEI, with support from the Swedish International Development Cooperation agency (Sida). Delivered by Jason Veysey and Charlotte Wagner of SEI.
This presentation is for lectures #3 and 4: Energy Supply and Emissions Modeling.
Find out more about this course here: https://www.weadapt.org/knowledge-base/synergies-between-adaptation-and-mitigation/introductory-low-emissions-analysis-platform-leap-training-course-2021
Cognitive Technique for Software Defined Optical Network (SDON)CPqD
This document discusses cognitive techniques for software defined optical networks (SDONs). It proposes using a fuzzy C-means (FCM) cognitive algorithm to determine modulation formats for high-speed transponders based on quality of transmission requirements. The FCM algorithm is compared to a case-based reasoning approach. Simulation results show the FCM approach has over two orders of magnitude faster computation time while achieving 100% accurate classification. This demonstrates FCM is a promising cognitive technique for SDON control planes to enable fast, autonomous decision making.
Improving Resource Utilization in Cloud using Application Placement HeuristicsAtakanAral
Application placement is an important concept when providing software as a service in cloud environments. Because of the potential downtime cost of application migration, most of the time additional resource acquisition is preferred over migrating the applications residing in the virtual machines (VMs). This situation results in under-utilized resources. To overcome this problem static/dynamic estimations on the resource requirements of VMs and/or applications can be performed.
A simpler strategy is using heuristics during application placement process instead of naively applying greedy strategies like round-robin. In this paper, we propose a number of novel heuristics and compare them with round robin placement strategy and a few proposed placement heuristics in the literature to explore the performance of heuristics in application placement problem. Our focus is to better utilize the resources offered by the cloud environment and at the same time minimize the number of application migrations. Our results indicate that an application heuristic that relies on the difference between the maximum and minimum utilization rates of the resources not only outperforms other application placement approaches but also significantly improves the conventional approaches present in the literature.
faisal mushtaq - an enterprise cloud cost management frameworkDariia Seimova
The document discusses cost management challenges in the cloud for enterprises and provides recommendations to address them. It outlines a cost management framework including initial planning, operational visibility and forecasting, governance, automation and optimization. It also discusses automation tools that can help with cost management tasks like visibility, optimization and billing. Finally, it presents a case study of a retail customer migrating to Google Cloud Platform where cost management was not initially planned and provides recommendations to help address their challenges.
This document presents an experiment on the performance and exergy analysis of a solar parabolic dish concentrator system installed at Universal Medicap Limited in Vadodara, India. The objectives are to validate a previous performance analysis methodology, develop an exergy analysis methodology, and conduct experiments. Work completed includes developing an exergy analysis model and Excel sheet, collecting experimental data, and publishing a paper on the performance analysis methodology. Current work involves analyzing collected data, calculating theoretical and actual efficiencies, and minimizing differences to optimize performance.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID ijgca
Grid computing is an accumulation of heterogeneous, dynamic resources from multiple administrative areas which are geographically distributed and can be utilized to reach a mutual end. Resource provisioning-based scheduling in large-scale distributed environments like grid computing brings in new requirement challenges that are not considered in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to resources that complete it, in an environment that cannot otherwise fulfill user requirements adequately. Satisfying users while providing resources can also increase the benefit to resource suppliers. Resource scheduling has to satisfy the multiple constraints specified by the user, and selecting a resource that satisfies all of them is a tedious process. This problem is addressed by introducing a particle swarm optimization based heuristic scheduling algorithm which attempts to select the most suitable resource from the set of available resources. The primary parameters taken in this work for selecting the most suitable resource are makespan and cost. The experimental results show that the proposed method yields optimal scheduling with the satisfaction of all user requirements.
Reliable and efficient webserver management for task scheduling in edge-cloud...IJECEIAES
Developing cloud webserver management for executing workflows while meeting quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task. A body of work has been presented for scheduling workflows in heterogeneous cloud environments. Moreover, rapid developments in cloud computing, such as edge-cloud computing, create new methods to schedule workflows in heterogeneous cloud environments to process different tasks like IoT, event-driven applications, and other network applications. Current workflow scheduling methods have failed to provide good trade-offs between reliable performance and minimal delay. In this paper, a novel web server resource management framework is presented, namely the reliable and efficient webserver management (REWM) framework for the edge-cloud environment. The experiment is conducted on complex bioinformatics workflows; the results show a significant reduction of cost and energy by the proposed REWM in comparison with standard webserver management methodology.
2. CLOUD COMPUTING
• CLOUD COMPUTING IS A MODEL FOR ENABLING UBIQUITOUS, CONVENIENT, ON-DEMAND NETWORK ACCESS TO A SHARED POOL OF
CONFIGURABLE COMPUTING RESOURCES (E.G., NETWORKS, SERVERS, STORAGE, APPLICATIONS, AND SERVICES) THAT CAN BE RAPIDLY
PROVISIONED AND RELEASED WITH MINIMAL MANAGEMENT EFFORT OR SERVICE PROVIDER INTERACTION.
• TYPES OF CLOUD
• PUBLIC CLOUD (FOR GENERAL PUBLIC, GOOGLE CLOUD)
• PRIVATE CLOUD (INFRASTRUCTURE IS SOLELY OPERATED FOR AN ORGANIZATION. E.G. HYPER-V)
• HYBRID CLOUD
• SERVICE MODEL
• INFRASTRUCTURE AS A SERVICE (AWS)
• PLATFORM AS A SERVICE (FORCE.COM)
• SOFTWARE AS A SERVICE (GOOGLE SPREADSHEET)
3. LITERATURE SURVEY
Sno | Authors | Journal | Year | Title | Proposed method | Parameters | Software
1 | Aburukba et al. | Future Generation Computer Systems | 2020 | Scheduling Internet of Things requests to minimize latency in hybrid Fog–Cloud computing | Customized GA | Latency | Lingo
2 | Abd Elaziz et al. | Future Generation Computer Systems | 2021 | Advanced optimization technique for scheduling IoT tasks in cloud-fog computing environments | AEOSSA (artificial ecosystem-based optimization (AEO), modified using salp swarm optimization) | Makespan, throughput, performance improvement rate | MATLAB R2018b
3 | Boveiri et al. | Journal of Ambient Intelligence and Humanized Computing | 2019 | An efficient Swarm-Intelligence approach for task scheduling in cloud-based internet of things applications | Max-Min Ant System (modified ant colony optimization) for scheduling of static graphs | Makespan, priority, normalized schedule length (makespan / weight of nodes on critical path) | MS Visual Basic 6.0
4 | Sun et al. | Wireless Personal Communications | 2018 | Multi-objective Optimization of Resource Scheduling in Fog Computing Using an Improved NSGA-II | Improved NSGA-II (multi-objective optimization technique) | Service latency, stability of task execution | MATLAB
5 | Nazir et al. | Conference paper | 2019 | Cuckoo Optimization Algorithm Based Job Scheduling Using Cloud and Fog Computing in Smart Grid | Cuckoo optimization algorithm to distribute tasks | Load balancing, response time, processing | CloudAnalyst
4. LITERATURE SURVEY (CONTD.)
6 | Agarwal et al. | Soft Computing: Theories and Applications | 2019 | A PSO Algorithm-Based Task Scheduling in Cloud Computing | Particle swarm optimization | Execution time | CloudSim
7 | Tychalas et al. | Simulation Modelling Practice and Theory | 2020 | A Scheduling Algorithm for a Fog Computing System with Bag-of-Tasks Jobs: Simulation and Performance Evaluation | Heuristic approach | Cost, response time, load balancing | C programming language
8 | Keshavarznejad et al. | Cluster Computing | 2021 | Delay-aware optimization of energy consumption for task offloading in fog environments | NSGA-II, Bees algorithm | Power consumption, delay | iFogSim
9 | Meng et al. | IEEE Access | 2017 | Delay-Constrained Hybrid Computation Offloading with Cloud and Fog Computing | Computation-energy-efficiency-based cloud and fog offloading method (computation energy efficiency (CEE) is defined as the amount of computation tasks that are offloaded by consuming a unit of energy) | Energy consumption, delay | Not mentioned
10 | Tavana et al. | Computers & Industrial Engineering | 2018 | A discrete cuckoo optimization algorithm for consolidation in cloud | Discrete cuckoo optimization | Energy, cost | MATLAB
5. LITERATURE SURVEY (CONTD.)
11 | Abbasi et al. | Journal of Grid Computing | 2020 | Workload Allocation in IoT-Fog-Cloud Architecture Using a Multi-Objective Genetic Algorithm | NSGA-II | Delay, energy consumption | MATLAB R2013a
12 | Mohammad et al. | IEEE Transactions on Mobile Computing | 2019 | An Application Placement Technique for Concurrent IoT Applications in Edge and Fog Computing Environments | Memetic algorithm | Energy consumption, execution time | iFogSim
13 | Jafari et al. | Journal of Ambient Intelligence and Humanized Computing | 2021 | Joint optimization of energy consumption and time delay in IoT-fog-cloud computing environments using NSGA-II metaheuristic algorithm | NSGA-II and BA with minimax differential evolution approach | Energy consumption, response time | iFogSim, SPSS
14 | Singh et al. | ACM Computing Surveys | 2022 | Towards Metaheuristic Scheduling Techniques in Cloud and Fog: An Extensive Taxonomic Review | Review paper | Review paper | NA
15 | Tychalas et al. | PCI 2020 | November 2020 | An Advanced Weighted Round Robin Scheduling Algorithm | Advanced weighted round robin | Load balance, response time, utility | C language, H
6. TASK SCHEDULING APPROACHES IN FOG COMPUTING: A COMPREHENSIVE REVIEW
• PROPOSED METHOD:
• REVIEW PAPER
• EVALUATION PARAMETER:
• TASK SCHEDULING
7. • SCHEDULING METHODS BASED ON COMPUTATION METHOD ARE CLASSIFIED INTO
• WORKFLOW SCHEDULING
• RESOURCE SCHEDULING
• TASK SCHEDULING
ON THE BASIS OF ARCHITECTURE
CENTRALIZED: A SINGLE SCHEDULER MAKES THE SCHEDULING DECISIONS FOR TASKS (FAULT TOLERANCE IS LOW)
DISTRIBUTED: SEVERAL SCHEDULERS TAKE SCHEDULING DECISIONS (HIGHLY SCALABLE, COMPLEX)
TASK SCHEDULING ALGORITHMS ARE CLASSIFIED AS
STATIC
DYNAMIC: CLASSIFIED INTO ONLINE AND BATCH GROUPS
HEURISTIC
HYBRID
8. CLASSIFICATION OF OPTIMIZATION PROBLEMS IN FOG COMPUTING
• PROPOSED METHOD:
• REVIEW PAPER
• EVALUATION PARAMETER:
• TAXONOMY OF OPTIMIZATION PROBLEMS IN FOG COMPUTING
TAXONOMY OF OPTIMIZATION PROBLEMS (BY LAYERS INVOLVED):
• ALL THREE LAYERS (CLOUD, FOG, END DEVICES)
• END DEVICES AND FOG NODES
• FOG NODES ONLY
• FOG NODES AND CLOUD
9. TAXONOMY OF OPTIMIZATION PROBLEMS IN FOG
COMPUTING
• MODEL BASED
• HEURISTIC
• META HEURISTIC
• METRICS
• DELAY NETWORK METRIC (MOSTLY CONSIDERED)
• ENERGY CONSUMPTION (ONLY A FEW STUDIES)
• SIMULATOR
• IFOGSIM
10. METRICS USED IN OPTIMIZATION
• TECHNIQUES USED:
• HEURISTIC
• META HEURISTIC
• METRICS
• RESPONSE TIME, ENERGY CONSUMPTION, COST, LATENCY
• SIMULATOR
• IFOGSIM
(DIAGRAM: METRICS CLASSIFIED AS STATIC, DYNAMIC, AND HYBRID)
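As a concrete illustration of the metrics listed above, the sketch below computes makespan, average response time, and resource utilization for a toy schedule. All task timings and node names are invented for the example; the formulas are the standard textbook definitions, not code from any reviewed paper.

```python
# Hypothetical sketch: computing common scheduling metrics from a finished
# schedule. Task fields and node names are illustrative only.

def makespan(tasks):
    """Completion time of the last task to finish."""
    return max(t["finish"] for t in tasks)

def avg_response_time(tasks):
    """Mean of (finish - arrival) over all tasks."""
    return sum(t["finish"] - t["arrival"] for t in tasks) / len(tasks)

def resource_utilization(tasks, nodes):
    """Fraction of nodes that executed at least one task."""
    used = {t["node"] for t in tasks}
    return len(used) / len(nodes)

tasks = [
    {"arrival": 0, "finish": 4, "node": "fog-1"},
    {"arrival": 1, "finish": 7, "node": "fog-2"},
    {"arrival": 2, "finish": 5, "node": "fog-1"},
]
nodes = ["fog-1", "fog-2", "fog-3"]

print(makespan(tasks))                      # 7
print(avg_response_time(tasks))             # (4 + 6 + 3) / 3 ≈ 4.33
print(resource_utilization(tasks, nodes))   # 2 of 3 nodes used
```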
11. TASK OFFLOADING
• TRANSFER COMPUTE INTENSIVE TASKS FROM RESOURCE LIMITED IOT DEVICES TO RESOURCE RICH COMPUTING NODES
• TECHNIQUES USED:
• MODEL BASED(MOSTLY USED)
• HEURISTIC(LESS USED)
• METRICS
• LATENCY
• ENERGY CONSUMPTION
• SIMULATOR
• MATLAB
(DIAGRAM: TASK OFFLOADING, SINGLE OR MULTIPLE)
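The offloading trade-off described above can be sketched as a simple decision rule: offload when the estimated remote latency and device-side energy both beat local execution. All device, link, and fog-node parameters below are invented for illustration and are not taken from the reviewed papers.

```python
# Illustrative sketch (not from the reviewed papers): a resource-limited
# device offloads a task to a resource-rich fog node when the estimated
# remote latency and energy beat local execution.

def local_cost(task_cycles, cpu_hz, power_w):
    t = task_cycles / cpu_hz             # local execution time (s)
    return t, t * power_w                # (latency, energy)

def offload_cost(task_bits, bandwidth_bps, tx_power_w, task_cycles, fog_hz):
    tx = task_bits / bandwidth_bps       # transmission time (s)
    exec_t = task_cycles / fog_hz        # execution time on the fog node (s)
    return tx + exec_t, tx * tx_power_w  # device only spends energy transmitting

def should_offload(task_cycles, task_bits):
    l_lat, l_en = local_cost(task_cycles, cpu_hz=1e9, power_w=0.9)
    r_lat, r_en = offload_cost(task_bits, 10e6, 0.3, task_cycles, fog_hz=5e9)
    return r_lat < l_lat and r_en < l_en

# Compute-heavy task: offloading wins; tiny task: local execution wins.
print(should_offload(task_cycles=2e9, task_bits=1e6))   # True
print(should_offload(task_cycles=1e7, task_bits=1e6))   # False
```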
14. RESOURCE MANAGEMENT APPROACHES IN FOG COMPUTING: A COMPREHENSIVE REVIEW
• FINDINGS:
• THE AUTHORS PRESENTED AN SLR ON RESOURCE MANAGEMENT APPROACHES IN FOG COMPUTING IN TAXONOMY FORM, CATEGORIZED INTO APPLICATION PLACEMENT, RESOURCE SCHEDULING, TASK OFFLOADING, LOAD BALANCING, AND RESOURCE ALLOCATION. THE AUTHORS REVIEWED THE ISSUES, APPROACHES, METRICS, AND SIMULATORS USED FOR EVALUATION IN ALL SIX CATEGORIES.
• RESEARCH GAP: A REVIEW OF PARAMETERS SUCH AS SLA PENALTY AND PRIORITY OF TASKS COULD ALSO BE DONE.
15. TOWARDS METAHEURISTIC SCHEDULING TECHNIQUES IN CLOUD AND FOG: AN EXTENSIVE TAXONOMIC REVIEW
• PROPOSED METHOD:
• REVIEW PAPER
• EVALUATION PARAMETER: METAHEURISTIC SCHEDULING TECHNIQUES
• SIMULATOR USED:
• LINGO
• RESULT:
• PROPOSED METHOD SHOWED BETTER PERFORMANCE THAN WEIGHTED FAIR QUEUING (WFQ), PRIORITY-STRICT QUEUING (PSQ), AND ROUND ROBIN (RR) TECHNIQUES.
• FINDINGS:
• INTRODUCED A MODIFIED GENETIC ALGORITHM TO OPTIMIZE TASK SCHEDULING IN HYBRID FOG-CLOUD COMPUTING. RESEARCHERS FORMULATED THE TASK SCHEDULING OPTIMIZATION PROBLEM AS AN INTEGER PROGRAMMING PROBLEM WITH THE OBJECTIVE OF REDUCING LATENCY, SUBJECT TO THE CONSTRAINTS THAT EACH REQUEST IS ASSIGNED ONE RESOURCE AND DEADLINE CRITERIA ARE MET.
• RESEARCH GAP:
• ANALYSIS IS DONE ONLY ON SMALL-SIZE DATA, AND PRE-EMPTION OF JOBS IS NOT CONSIDERED.
16. SCHEDULING INTERNET OF THINGS REQUESTS TO MINIMIZE LATENCY IN HYBRID FOG-CLOUD COMPUTING
• PROPOSED METHOD:
• CUSTOMIZED GENETIC ALGORITHM TO SCHEDULE IOT TASKS IN A CLOUD-FOG ENVIRONMENT
• EVALUATION PARAMETER: LATENCY
• SIMULATOR USED:
• LINGO
• RESULT:
• PROPOSED METHOD SHOWED BETTER PERFORMANCE THAN WEIGHTED FAIR QUEUING (WFQ), PRIORITY-STRICT QUEUING (PSQ), AND ROUND ROBIN (RR) TECHNIQUES.
• FINDINGS:
• INTRODUCED A MODIFIED GENETIC ALGORITHM TO OPTIMIZE TASK SCHEDULING IN HYBRID FOG-CLOUD COMPUTING. RESEARCHERS FORMULATED THE TASK SCHEDULING OPTIMIZATION PROBLEM AS AN INTEGER PROGRAMMING PROBLEM WITH THE OBJECTIVE OF REDUCING LATENCY, SUBJECT TO THE CONSTRAINTS THAT EACH REQUEST IS ASSIGNED ONE RESOURCE AND DEADLINE CRITERIA ARE MET.
• RESEARCH GAP:
• ANALYSIS IS DONE ONLY ON SMALL-SIZE DATA, AND PRE-EMPTION OF JOBS IS NOT CONSIDERED.
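A minimal sketch of the kind of customized GA described above: a chromosome assigns each request to one resource, the fitness is total latency, and assignments that miss a deadline are penalized. The latency matrix, deadline value, and GA settings are all invented for illustration; this is not the authors' implementation.

```python
# Sketch of a GA for latency-minimizing task-to-resource assignment,
# under the stated constraints (one resource per request, deadline penalty).
import random

random.seed(1)
N_REQ, N_RES = 8, 3
LAT = [[random.randint(1, 9) for _ in range(N_RES)] for _ in range(N_REQ)]
DEADLINE = [8] * N_REQ          # illustrative per-request deadline

def fitness(chrom):
    total = 0
    for req, res in enumerate(chrom):
        lat = LAT[req][res]
        total += lat + (100 if lat > DEADLINE[req] else 0)  # deadline penalty
    return total

def evolve(pop_size=30, gens=50):
    pop = [[random.randrange(N_RES) for _ in range(N_REQ)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]        # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_REQ)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # occasional mutation
                child[random.randrange(N_REQ)] = random.randrange(N_RES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```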
17. A NATURE-INSPIRED-BASED MULTI-OBJECTIVE SERVICE PLACEMENT IN FOG COMPUTING ENVIRONMENT
• PROPOSED METHOD:
• GENETIC-ALGORITHM-BASED ALGORITHM TO SOLVE THE APPLICATION PLACEMENT PROBLEM
• EVALUATION PARAMETER: MAKESPAN, ENERGY CONSUMPTION, COST
• SIMULATOR USED:
• YAFS (YET ANOTHER FOG SIMULATOR)
• RESULT:
• PROPOSED METHOD SHOWED BETTER PERFORMANCE THAN THE RANDOM PLACEMENT ALGORITHM.
• FINDINGS:
• THE AUTHORS PROPOSED A GENETIC-ALGORITHM-BASED ALGORITHM FOR PLACEMENT OF APPLICATIONS ON FOG NODES. TO FULLY AND EFFICIENTLY UTILIZE THE RESOURCES, APPLICATIONS ARE DIVIDED INTO INDEPENDENT SERVICES, WHICH ARE THEN PLACED ON FOG NODES TO ENSURE QUALITY OF SERVICE. THE PROBLEM IS EXPRESSED AS A MULTI-OBJECTIVE PROBLEM WITH MAKESPAN, ENERGY, AND COST (WITH DIFFERENT WEIGHTS) AS OBJECTIVES AND DEADLINE-BASED CONSTRAINTS. SIMULATION SHOWS THAT THE GA-BASED ALGORITHM OUTPERFORMS THE RANDOM PLACEMENT ALGORITHM.
• RESEARCH GAP:
• THE ALGORITHM COULD BE TESTED WITH REAL DATA, AND MORE QOS PARAMETERS LIKE LATENCY COULD BE CONSIDERED FOR EVALUATION.
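The weighted multi-objective fitness idea above can be sketched in a few lines: the three objectives are normalized and combined with user-chosen weights into one score that a GA would minimize. The weights, normalization scales, and candidate values are assumptions for the example, not values from the paper.

```python
# Sketch of weighted-sum multi-objective fitness for service placement.
# Weights and normalization scales are illustrative assumptions.

def weighted_fitness(makespan, energy, cost,
                     w=(0.5, 0.3, 0.2),
                     norms=(10.0, 100.0, 50.0)):
    """Lower is better. norms are reference scales so the three
    objectives are comparable before weighting (an assumption here)."""
    objs = (makespan / norms[0], energy / norms[1], cost / norms[2])
    return sum(wi * oi for wi, oi in zip(w, objs))

# Two candidate placements of services on fog nodes:
a = weighted_fitness(makespan=6.0, energy=40.0, cost=20.0)
b = weighted_fitness(makespan=9.0, energy=20.0, cost=10.0)
print(a, b)   # the GA would prefer the smaller score
print(min(("A", a), ("B", b), key=lambda p: p[1])[0])   # A
```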
18. SCHEDULING INTERNET OF THINGS REQUESTS TO MINIMIZE LATENCY IN HYBRID FOG-CLOUD COMPUTING
• PROPOSED METHOD:
• CUSTOMIZED GENETIC ALGORITHM TO SCHEDULE IOT TASKS IN A CLOUD-FOG ENVIRONMENT
• EVALUATION PARAMETER: EXECUTION TIME
• SIMULATOR USED:
• IFOGSIM
• RESULT:
• PROPOSED METHOD SHOWED BETTER PERFORMANCE THAN WEIGHTED FAIR QUEUING (WFQ), PRIORITY-STRICT QUEUING (PSQ), AND ROUND ROBIN (RR) TECHNIQUES.
• FINDINGS:
• A GA-BASED COST-EFFICIENT SCHEDULING TECHNIQUE IS PROPOSED TO MAP APPLICATION MODULES TO VARIOUS RESOURCES IN A CLOUD-FOG ENVIRONMENT WITH THE OBJECTIVE OF MINIMIZING EXECUTION TIME. MODULES WITH COMPUTATION REQUIREMENTS ABOVE A THRESHOLD VALUE ARE PASSED TO THE CLOUD; THE REMAINING MODULES ARE PASSED TO THE GA AS THE INITIAL POPULATION. ONE-POINT CROSSOVER AND SINGLE-POINT MUTATION ARE USED. THE PROPOSED SCHEDULING TECHNIQUE IS SIMULATED AND FOUND TO BE BETTER THAN THE GA AND RACE TECHNIQUES.
• RESEARCH GAP:
• ONLY ONE OBJECTIVE IS CONSIDERED FOR EVALUATION. ENERGY EFFICIENCY IS NOT EVALUATED.
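The one-point crossover and single-point mutation operators mentioned in the findings can be sketched as follows. The chromosome encoding (a list mapping application modules to resource indices) is an assumption for illustration, not the paper's exact encoding.

```python
# Sketch of the two GA variation operators named above, for chromosomes
# that map application modules to resource indices (illustrative encoding).
import random

def one_point_crossover(parent_a, parent_b, rng):
    cut = rng.randrange(1, len(parent_a))   # split point, never at the ends
    return parent_a[:cut] + parent_b[cut:]

def single_point_mutation(chrom, n_resources, rng):
    child = chrom[:]                         # copy, then change one gene
    child[rng.randrange(len(child))] = rng.randrange(n_resources)
    return child

rng = random.Random(42)
a, b = [0, 0, 0, 0, 0], [1, 1, 1, 1, 1]
child = one_point_crossover(a, b, rng)
print(child)   # a prefix of zeros followed by a suffix of ones
print(single_point_mutation(child, n_resources=3, rng=rng))
```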
19. MULTI-OBJECTIVE OPTIMIZATION OF RESOURCE SCHEDULING IN FOG COMPUTING USING AN IMPROVED
NSGA-II
• PROPOSED METHOD:
• TWO LEVEL RESOURCE SCHEDULING IS INVESTIGATED
• SCHEDULING AMONG FOG CLUSTERS AND SCHEDULING AMONG NODES WITHIN SAME CLUSTER
• IMPROVED NSGA-II TO SCHEDULE IOT TASKS AMONG FOG NODES IN SAME FOG CLUSTER.
• EVALUATION PARAMETER: SERVICE LATENCY, STABILITY (AS SOME FOG NODES ARE NOT RELIABLE)
• SIMULATOR USED:
• MATLAB
• RESULT:
• PROPOSED METHOD SHOWED BETTER PERFORMANCE THAN RANDOM (SELECTS ONE SOLUTION FOR RESOURCE SCHEDULING
AT RANDOM) AND FIRMM (A FOG-BASED IOT RESOURCE MANAGEMENT MODEL AIMED AT SCHEDULING AND MANAGING
RESOURCES EFFICIENTLY AND ON TIME).
• FINDINGS:
• A TWO-LEVEL RESOURCE SCHEDULING APPROACH IS PRESENTED USING A MODIFIED NSGA-II, WITH THE AIM OF MINIMIZING
LATENCY AND ACHIEVING STABILITY. THE AUTHORS COMPARE IT WITH THE EXISTING RANDOM AND FIRMM SCHEDULING
TECHNIQUES AND OBSERVE THAT, IN TERMS OF AVERAGE LATENCY, ALL THREE SCHEMES ARE EQUALLY EFFICIENT WHEN THE
NUMBER OF JOBS IS SMALL, BUT THE PROPOSED SCHEME IS MORE EFFICIENT AS THE NUMBER OF TASKS GROWS. IN TERMS OF
AVERAGE STABILITY, THE PROPOSED SCHEME DOMINATES THE EXISTING SCHEMES.
• RESEARCH GAP:
• COST AND ENERGY EFFICIENCY ARE NOT CONSIDERED.
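The core ranking step of NSGA-II, which the paper builds on, is non-dominated sorting over the two objectives (service latency and instability, both minimized). A minimal sketch of the first Pareto front, with illustrative candidate values rather than data from the paper:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: a is no worse on every
    objective and strictly better on at least one (both minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """First non-dominated front: solutions dominated by no other."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# (latency_ms, instability) pairs for candidate schedules -- illustrative only
cands = [(12.0, 0.30), (10.0, 0.50), (15.0, 0.20), (11.0, 0.25), (16.0, 0.60)]
front = pareto_front(cands)
```

NSGA-II then ranks the remaining solutions into further fronts and uses crowding distance to keep the population spread along the trade-off curve.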
21. OPTIMIZATION TECHNIQUES
GRADIENT VS NON-GRADIENT BASED ALGORITHMS
• GRADIENT BASED
• E.G. GRADIENT DESCENT
• NON-GRADIENT BASED
• TRAJECTORY BASED (HILL CLIMBING, SIMULATED ANNEALING)
• POPULATION BASED (GENETIC ALGORITHM, PARTICLE SWARM OPTIMIZATION)
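Trajectory-based methods keep a single candidate solution and move it through the search space; simulated annealing additionally accepts some worsening moves with probability exp(-delta/T) to escape local optima. A minimal sketch on a toy objective (all parameters here are illustrative defaults, not tied to any of the reviewed papers):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=500):
    """Trajectory-based search: follow one solution, accepting worse
    neighbors with probability exp(-delta / T) under geometric cooling."""
    x, t = x0, t0
    best = x
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y                      # accept the move
        if cost(x) < cost(best):
            best = x                   # track the best solution seen
        t *= alpha                     # cool down
    return best

# Toy objective: minimize (x - 3)^2 starting from x = 0
random.seed(0)
sol = simulated_annealing(lambda x: (x - 3) ** 2,
                          lambda x: x + random.uniform(-0.5, 0.5),
                          x0=0.0)
```

Population-based methods such as GA and PSO differ in that they evolve many candidate solutions in parallel rather than a single trajectory.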
22. TASK SCHEDULING METRICS
• TASK SCHEDULING PERFORMANCE IN FOG COMPUTING HAS BEEN EVALUATED BY
SEVERAL PERFORMANCE METRICS.
THE MOST COMMONLY USED PERFORMANCE METRICS ARE:
RESOURCE UTILIZATION: THE PROPORTION OF AVAILABLE RESOURCES ACTUALLY
USED IN EXECUTING TASKS
RESPONSE TIME: THE TIME INTERVAL FROM WHEN A TASK ARRIVES IN THE SYSTEM
UNTIL IT IS COMPLETED
COST: THE MONETARY AMOUNT PAID FOR EXECUTING THE TASKS ON FOG
RESOURCES
MAKESPAN: THE TOTAL COMPLETION TIME OF A SCHEDULE, I.E. THE FINISH TIME
OF THE LAST TASK ONCE ALL TASKS HAVE BEEN SCHEDULED
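The response-time and makespan metrics above can be computed directly from a schedule. The task records below (arrival time, start time, length in MI, assigned node's MIPS) are a hypothetical example, not data from any reviewed paper:

```python
# Hypothetical schedule: each task records its arrival time, its start
# time on the assigned node, its length (MI) and that node's MIPS rating.
tasks = [
    {"arrival": 0.0, "start": 0.0, "length": 500,  "node_mips": 1000},
    {"arrival": 1.0, "start": 1.5, "length": 800,  "node_mips": 2000},
    {"arrival": 2.0, "start": 2.0, "length": 1200, "node_mips": 1000},
]

def finish(t):
    """Completion time: start time plus execution time on the node."""
    return t["start"] + t["length"] / t["node_mips"]

# Response time: interval from a task's arrival until its completion.
response_times = [finish(t) - t["arrival"] for t in tasks]

# Makespan: completion time of the last task in the schedule.
makespan = max(finish(t) for t in tasks)
```

Resource utilization and cost would be computed analogously from the per-node busy times and the provider's pricing.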
23. RESEARCH OBJECTIVES
• TO UNDERSTAND RESOURCE MANAGEMENT PROBLEMS IN CLOUD/FOG
COMPUTING
• TO CONDUCT AN EXTENSIVE LITERATURE REVIEW
• TO PUBLISH A RESEARCH PAPER
24. REFERENCES
Aburukba, R. O., AliKarrar, M., Landolsi, T., & El-Fakih, K. (2020). Scheduling Internet of Things requests to minimize latency
in hybrid Fog–Cloud computing. Future Generation Computer Systems, 111, 539-551.
Abd Elaziz, M., Abualigah, L., & Attiya, I. (2021). Advanced optimization technique for scheduling IoT tasks in cloud-fog
computing environments. Future Generation Computer Systems, 124, 142-154.
Boveiri, H. R., Khayami, R., Elhoseny, M., & Gunasekaran, M. (2019). An efficient Swarm-Intelligence approach for task
scheduling in cloud-based internet of things applications. Journal of Ambient Intelligence and Humanized
Computing, 10(9), 3469-3479.
Sun, Y., Lin, F., & Xu, H. (2018). Multi-objective optimization of resource scheduling in fog computing using an improved
NSGA-II. Wireless Personal Communications, 102(2), 1369-1385.
Nazir, S., Shafiq, S., Iqbal, Z., Zeeshan, M., Tariq, S., & Javaid, N. (2018, September). Cuckoo optimization algorithm-based
job scheduling using cloud and fog computing in smart grid. In International Conference on Intelligent Networking and
Collaborative Systems (pp. 34-46). Springer, Cham.
Agarwal, M., & Srivastava, G. M. S. (2019). A PSO algorithm based task scheduling in cloud computing. International Journal
of Applied Metaheuristic Computing (IJAMC), 10(4), 1-17.
Tychalas, D., & Karatza, H. (2020). A scheduling algorithm for a fog computing system with bag-of-tasks jobs: Simulation
and performance evaluation. Simulation Modelling Practice and Theory, 98, 101982.
Keshavarznejad, M., Rezvani, M. H., & Adabi, S. (2021). Delay-aware optimization of energy consumption for task offloading
in fog environments using metaheuristic algorithms. Cluster Computing, 24(3), 1825-1853.
Meng, X., Wang, W., & Zhang, Z. (2017). Delay-constrained hybrid computation offloading with cloud and fog
computing. IEEE Access, 5, 21355-21367.
25. REFERENCES (CONTD.)
Tavana, M., Shahdi-Pashaki, S., Teymourian, E., Santos-Arteaga, F. J., & Komaki, M. (2018). A discrete cuckoo
optimization algorithm for consolidation in cloud computing. Computers & Industrial Engineering, 115,
495-511.
Abbasi, M., Mohammadi Pasand, E., & Khosravi, M. R. (2020). Workload allocation in IoT-fog-cloud
architecture using a multi-objective genetic algorithm. Journal of Grid Computing, 18(1), 43-56.
Goudarzi, M., Wu, H., Palaniswami, M., & Buyya, R. (2020). An application placement technique for concurrent
IoT applications in edge and fog computing environments. IEEE Transactions on Mobile Computing, 20(4),
1298-1311.
Jafari, V., & Rezvani, M. H. (2021). Joint optimization of energy consumption and time delay in IoT-fog-cloud
computing environments using NSGA-II metaheuristic algorithm. Journal of Ambient Intelligence and
Humanized Computing, 1-24.
Singh, R. M., Awasthi, L. K., & Sikka, G. (2022). Towards metaheuristic scheduling techniques in cloud and fog:
An extensive taxonomic review. ACM Computing Surveys (CSUR), 55(3), 1-43.
Tychalas, D., & Karatza, H. (2020, November). An advanced weighted round robin scheduling algorithm.
In 24th Pan-Hellenic Conference on Informatics (pp. 188-191).