This document presents a novel approach to scheduling real-time tasks on a heterogeneous multicore processor, using fuzzy logic techniques for micro-grid power management. It proposes two fuzzy logic-based scheduling algorithms: 1) assign priorities to tasks based on their execution times and deadlines; 2) dispatch higher-priority tasks to the high-performance core and lower-priority tasks to the low-performance cores. The goals are to increase throughput, improve CPU utilization, and reduce the overall power consumption of micro-grid systems. The algorithms were evaluated in MATLAB using test cases with different task parameters, and the results showed improvements in performance and reductions in power consumption.
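The two-stage scheme described above can be sketched in a few lines. The scoring rule, thresholds, and all names below are illustrative assumptions, not the paper's MATLAB fuzzy inference system:

```python
# Minimal sketch of the two-stage idea: (1) derive a task priority from
# execution time and deadline, (2) map the highest-priority tasks to the
# high-performance core. Scoring rule and names are illustrative only.

def priority(exec_time, deadline):
    """Crude fuzzy-style score: tighter deadlines and higher utilization
    both push the task toward high priority (a larger score)."""
    urgency = 1.0 / deadline          # tighter deadline -> more urgent
    load = exec_time / deadline       # utilization-like term
    return urgency + load

def assign(tasks, n_fast=1):
    """Send the n_fast highest-priority tasks to the high-performance
    core; the rest go to the low-performance cores."""
    ranked = sorted(tasks, key=lambda t: priority(t["exec"], t["deadline"]),
                    reverse=True)
    return {"fast": ranked[:n_fast], "slow": ranked[n_fast:]}

tasks = [{"id": "A", "exec": 4, "deadline": 10},
         {"id": "B", "exec": 2, "deadline": 3},
         {"id": "C", "exec": 5, "deadline": 20}]
plan = assign(tasks)
```

Task B, with the tightest deadline relative to its execution time, ends up on the fast core under this scoring rule.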
In the era of big data, even with large infrastructure available, stored data varies in size, format, variety, and volume, and is spread across platforms such as Hadoop and the cloud; the problem an application faces is how to process data that varies in this way. A workflow whose data and available resources vary at run time is called a dynamic workflow. Using large infrastructure and a huge amount of resources to analyse data is time-consuming and wasteful; it is better to use a scheduling algorithm so that a given data set is executed efficiently, and to evaluate which scheduling algorithm is best suited to that data set. We evaluate different data sets to determine the most suitable algorithm for efficient analysis, and store the data after analysis.
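The evaluation step the abstract proposes, trying several scheduling algorithms on a given data set and keeping the best, can be sketched as a tiny harness. The candidate "algorithms" and the waiting-time metric here are placeholder choices, not the ones the authors evaluated:

```python
# Hypothetical harness: run each candidate scheduling algorithm over the
# same job list (burst times) and pick the one with the lowest average
# waiting time. FIFO and SJF stand in for the evaluated algorithms.

def fifo(jobs):
    return list(jobs)

def sjf(jobs):
    return sorted(jobs)

def avg_waiting_time(schedule):
    waited, clock = 0, 0
    for burst in schedule:
        waited += clock      # this job waited for everything before it
        clock += burst
    return waited / len(schedule)

def pick_best(jobs, algorithms):
    return min(algorithms, key=lambda a: avg_waiting_time(algorithms[a](jobs)))

best = pick_best([5, 1, 3], {"fifo": fifo, "sjf": sjf})
```

On this job list, SJF's ordering wins on average waiting time, which matches the classic result that SJF minimizes mean waiting time for known bursts.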
Efficient Resource Management Mechanism with Fault Tolerant Model for Computa... - Editor IJCATR
Grid computing provides a framework and deployment environment that enables resource sharing, access, aggregation, and management. It allows the coordinated use of various resources in dynamic, distributed virtual organizations. The grid scheduler is responsible for resource discovery, resource selection, and job assignment over a decentralized heterogeneous system. In the existing system, a primary-backup approach is used for fault tolerance in a single environment: each task has a primary copy and a backup copy on two different processors. For dependent tasks, precedence constraints among tasks must be considered when scheduling backup copies and overloading backups. Two algorithms have accordingly been developed to schedule backups of dependent and independent tasks. The proposed work manages resource failures in grid job scheduling. In this method, data sources and resources are integrated from different geographical environments, and fault-tolerant scheduling with the primary-backup approach is used to handle job failures in the grid environment. The impact of communication protocols is also considered: protocols such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) are used to distribute each task's messages to the grid resources.
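The primary-backup placement constraint described in this abstract, two copies of each task on two different processors, can be sketched as follows. The least-loaded placement policy is an illustrative stand-in, not the paper's algorithm, and precedence constraints are ignored here:

```python
# Hedged sketch of primary-backup scheduling: each task gets a primary
# and a backup copy on two *different* processors, so a single processor
# failure never loses both copies. Placement policy is illustrative.

def schedule_primary_backup(tasks, processors):
    load = {p: 0 for p in processors}
    placement = {}
    for task, cost in tasks:
        # primary on the least-loaded processor
        primary = min(load, key=load.get)
        load[primary] += cost
        # backup on the least-loaded processor other than the primary
        backup = min((p for p in load if p != primary), key=load.get)
        load[backup] += cost
        placement[task] = (primary, backup)
    return placement

placement = schedule_primary_backup([("t1", 3), ("t2", 2)], ["p0", "p1", "p2"])
```

The invariant that matters for fault tolerance is simply that no task's two copies share a processor.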
NETWORK-AWARE DATA PREFETCHING OPTIMIZATION OF COMPUTATIONS IN A HETEROGENEOU... - IJCNCJournal
The rapid development of diverse computer architectures and hardware accelerators means that the design of parallel systems faces new problems arising from their heterogeneity. Our implementation of a parallel system called KernelHive allows applications to run efficiently in a heterogeneous environment consisting of multiple collections of nodes with different types of computing devices. The execution engine of the system is open to optimizer implementations focusing on various criteria. In this paper, we propose a new optimizer for KernelHive that utilizes distributed databases and performs data prefetching to optimize the execution time of applications that process large input data. Employing a versatile data management scheme that allows various distributed data providers to be combined, we propose using NoSQL databases for this purpose. We support our solution with experimental results from real executions of our OpenCL implementation of a regular expression matching application in various hardware configurations. Additionally, we propose a network-aware scheduling scheme for selecting hardware for the proposed optimizer and present simulations that demonstrate its advantages.
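The prefetching idea this abstract relies on, overlapping the fetch of the next input chunk with computation on the current one, can be sketched as a small producer-consumer pipeline. KernelHive's actual optimizer and its NoSQL data providers are not modelled; the `fetch` and `process` callables are placeholders:

```python
import threading, queue

# Hedged sketch of data prefetching: a background thread fetches the
# next chunk while the main thread processes the current one, so the
# compute device does not sit idle waiting on the network.

def prefetching_pipeline(chunks, fetch, process):
    results = []
    q = queue.Queue(maxsize=1)   # one chunk "in flight" ahead of compute

    def producer():
        for c in chunks:
            q.put(fetch(c))      # fetch next chunk while consumer works
        q.put(None)              # sentinel: no more data

    threading.Thread(target=producer, daemon=True).start()
    while (data := q.get()) is not None:
        results.append(process(data))
    return results

out = prefetching_pipeline([1, 2, 3], fetch=lambda c: c * 10,
                           process=lambda d: d + 1)
```

With real network fetches, the wall-clock win comes from the fetch of chunk i+1 overlapping the processing of chunk i.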
(Paper) Task scheduling algorithm for multicore processor system for minimiz... - Naoki Shibata
Shohei Gotoda, Naoki Shibata and Minoru Ito: "Task scheduling algorithm for multicore processor system for minimizing recovery time in case of single node fault," Proceedings of the IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2012), pp. 260-267, DOI: 10.1109/CCGrid.2012.23, May 15, 2012.
In this paper, we propose a task scheduling algorithm for a multicore processor system that reduces the recovery time in the case of a single fail-stop failure of a multicore processor. Many recently developed processors have multiple cores on a single die, so one failure of a computing node results in the failure of many processors. When a multicore processor fails, all tasks that were being executed on it have to be recovered at once. The proposed algorithm is based on an existing checkpointing technique, and we assume that state is saved when nodes send results to the next node. If a series of computations that depends on earlier results is executed on a single die, then all parts of that series must be executed again should the processor fail. The proposed scheduling algorithm therefore tries not to concentrate tasks on the processors of a single die. We designed our algorithm as a parallel algorithm that achieves O(n) speedup, where n is the number of processors. We evaluated our method using simulations and experiments with four PCs. Compared with an existing scheduling method, in the simulation the execution time including recovery time in the case of a node failure was reduced by up to 50%, while the overhead in the failure-free case was a few percent in typical scenarios.
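The core placement idea in this abstract, spreading a dependent chain across dies so that one die failure invalidates as little work as possible, can be sketched as follows. The round-robin placement is an illustrative stand-in for the paper's scheduling algorithm:

```python
# Sketch: avoid concentrating a dependent chain on one die. With
# checkpoints taken whenever a result is passed between nodes, a
# fail-stop failure of one die then forces re-execution only of the
# tasks placed on that die, not the whole chain.

def spread_chain(chain, dies):
    """Place consecutive tasks of a dependency chain on alternating dies."""
    return {task: dies[i % len(dies)] for i, task in enumerate(chain)}

def tasks_lost(placement, failed_die):
    """Only tasks on the failed die must be re-executed after recovery."""
    return [t for t, d in placement.items() if d == failed_die]

placement = spread_chain(["t0", "t1", "t2", "t3"], ["die0", "die1"])
```

Had all four tasks been packed onto `die0`, its failure would have cost the entire chain; alternating dies halves the worst-case loss in this toy setting.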
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in Science and Technology, and to encourage and motivate researchers in challenging areas of Science and Technology.
ORCHESTRATING BULK DATA TRANSFERS ACROSS GEO-DISTRIBUTED DATACENTERS - Nexgen Technology
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT - IJCNCJournal
Cloud computing plays an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is accommodating user requirements that keep varying, and this dynamic environment demands complex algorithms to resolve the problem of task allotment. The overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms, and the dynamic nature of these systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
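One way to combine the two algorithms the abstract names is to serve the ready queue shortest-burst-first within each Round Robin pass, so short jobs finish early while the quantum still bounds starvation. This is a hedged sketch of that style of hybrid, not the paper's exact formulation; the quantum size is an arbitrary choice:

```python
# Hypothetical RR/SJF hybrid: each "round" visits every remaining task
# once (Round Robin, bounding starvation), but in ascending order of
# remaining burst (SJF, reducing average waiting time).

def rr_sjf(tasks, quantum=2):
    """tasks: dict of name -> burst time. Returns completion order."""
    remaining = dict(tasks)
    order = []
    while remaining:
        # snapshot sorted shortest-first; safe to delete during the loop
        for name in sorted(remaining, key=remaining.get):
            remaining[name] -= quantum
            if remaining[name] <= 0:
                del remaining[name]
                order.append(name)
    return order

order = rr_sjf({"long": 6, "mid": 3, "short": 1})
```

Short jobs complete in the earliest rounds, yet the long job is guaranteed a quantum every round rather than waiting behind all shorter arrivals.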
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Elastic neural network method for load prediction in cloud computing grid - IJECEIAES
Cloud computing still has no standard definition, yet it concerns the on-demand delivery of resources and services over the Internet or a network. It has gained much popularity in the last few years due to rapid growth in technology and the Internet. Many technical challenges of cloud computing are yet to be tackled, such as virtual machine migration, server association, fault tolerance, scalability, and availability. This research is chiefly concerned with balancing server load: the mechanism for spreading load between the nodes of a distributed system, which helps to utilize resources, improve job response time, and enhance scalability and user satisfaction. A load rebalancing algorithm with dynamic resource allocation is presented to adapt to the changing needs of a cloud environment. The research presents a modified elastic adaptive neural network (EANN) with modified adaptive error smoothing, building an evolving system that predicts virtual machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using a cloud simulator and made comparisons with approaches suggested in previous work. The experimental results show that the suggested method significantly outperforms the existing approaches.
A Review - Synchronization Approaches to Digital systems - IJERA Editor
Synchronization is a prime requirement in digital systems. As new devices emerge to provide higher service levels, advanced distributed systems are being integrated onto single platforms for higher service provision. However, with the integration of large processing units, distributed processing needs high-level synchronization with minimum processing overhead. The issue of synchronization has been addressed by various approaches. This paper outlines a brief review of developments in the field of synchronization approaches for digital systems operating in distributed mode.
The Impact of Data Replication on Job Scheduling Performance in Hierarchical ... - graphhoc
In data-intensive applications, data transfer is a primary cause of job execution delay. Data access time depends on bandwidth, and the major bottleneck to fast data access in grids is the high latency of wide area networks and the Internet. Effective scheduling can reduce the amount of data transferred across the Internet by dispatching a job to where the needed data are present. Another solution is a data replication mechanism: the objective of dynamic replica strategies is to reduce file access time, which in turn reduces job runtime. In this paper we develop a job scheduling policy and a dynamic data replication strategy, called HRS (Hierarchical Replication Strategy), to improve data access efficiency. We study our approach and evaluate it through simulation. The results show that our algorithm improves on the current strategies by 12%.
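The "dispatch the job to where the data are" policy in this abstract can be sketched as a one-line site selection. HRS itself is hierarchical and also decides what to replicate; this flat version only illustrates the locality-aware scheduling half, and all names are made up:

```python
# Hedged sketch of locality-aware dispatch: send each job to the site
# that already holds the largest number of its input files, so the
# fewest files must cross the wide-area network.

def best_site(job_files, site_contents):
    """job_files: set of needed files; site_contents: site -> set of files."""
    return max(site_contents,
               key=lambda s: len(job_files & site_contents[s]))

sites = {"siteA": {"f1", "f2"}, "siteB": {"f1"}, "siteC": set()}
site = best_site({"f1", "f2", "f3"}, sites)
```

A replication strategy then works the other axis of the same trade-off: instead of moving the job to the data, it copies hot files closer to where jobs run.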
DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING - cscpconf
Parallel computing systems compose task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and the processing units as computing resources, which leads to heavy inter-process communication. The main part of the proposed algorithm is the resource management unit, which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network, focusing on real-time, non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of programming costs. The results show the benefit of task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling, and communication. Some important models for task partitioning are also discussed in the paper. We target an algorithm for task partitioning that improves inter-process communication between tasks and uses the resources of the system efficiently. The proposed algorithm minimizes the inter-process communication cost among the executing processes.
Cloud computing Review over various scheduling algorithms - IJEEE
Cloud computing has taken an important position in research as well as in government organisations. Cloud computing uses virtual network technology to provide computing resources to end users and customers. Due to the complexity of the computing environment, the use of elaborate logic and task scheduler algorithms increases, which results in costly operation of the cloud network. Researchers are attempting to build job scheduling algorithms that are compatible with and applicable to the cloud computing environment. In this paper, we review research work recently proposed on the basis of energy-saving scheduling techniques. We also study various scheduling algorithms and the issues related to them in cloud computing.
A survey of various scheduling algorithm in cloud computing environment - eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars, and Students of related fields of Engineering and Technology.
Optimized Assignment of Independent Task for Improving Resources Performance ... - ijgca
Grid computing has emerged from the category of distributed and parallel computing, in which heterogeneous resources from different networks are used simultaneously to solve a particular problem that needs a huge amount of resources. The potential of grid computing depends on many issues, such as resource security, resource heterogeneity, fault tolerance, resource discovery, and job scheduling. Scheduling is one of the core steps in efficiently exploiting the capabilities of heterogeneous distributed computing resources, and it is an NP-complete problem. To achieve the promising potential of grid computing, an effective and efficient job scheduling algorithm is proposed that optimizes two important criteria for improving resource performance: makespan and resource utilization. In addition, we classify various task scheduling heuristics in grids on the basis of their characteristics.
Hardware simulation for exponential blind equal throughput algorithm using sy... - IJECEIAES
A scheduling mechanism is the process of allocating radio resources to User Equipment (UE) transmitting different flows at the same time. It is performed by the scheduling algorithm implemented in the Long Term Evolution base station, the Evolved Node B. Most proposed algorithms do not focus on handling real-time and non-real-time traffic simultaneously, so a UE with bad channel quality may starve because no resources are allocated to it for a long time. To solve this problem, the Exponential Blind Equal Throughput (EXP-BET) algorithm is proposed: the user with the highest priority metric, computed with the EXP-BET metric equation, is allocated resources first. This study investigates the implementation of the EXP-BET scheduling algorithm on an FPGA platform. The metric equation of EXP-BET is modelled and simulated using System Generator; the design utilizes only 10% of the available resources on the FPGA, and fixed numbers are used for all inputs to the scheduler. System verification is performed through hardware co-simulation of the EXP-BET metric computation. The hardware co-simulation output shows that the EXP-BET metric values match those produced in the Simulink environment; the algorithm is thus ready for prototyping, and the Virtex-6 FPGA is chosen as the platform.
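The abstract does not reproduce the EXP-BET metric equation, but the family it belongs to combines an exponential head-of-line-delay term (the EXP rule) with a Blind Equal Throughput term that favours users with low past average throughput. The sketch below is an assumed illustrative form, with made-up coefficients, not the cited paper's equation:

```python
import math

# Illustrative EXP-BET-style priority metric (assumed form): an
# exponential delay term multiplied by a BET term, 1 / past throughput,
# so long-starved, low-throughput users rise to the top of the ranking.

def exp_bet_metric(hol_delay, avg_delay, past_throughput, alpha=0.5):
    exp_term = math.exp((alpha * hol_delay) / (1 + math.sqrt(avg_delay)))
    bet_term = 1.0 / max(past_throughput, 1e-9)  # BET: inverse past rate
    return exp_term * bet_term

# A starved user (long head-of-line delay, low past throughput) should
# out-rank a well-served one under any metric of this shape.
m_starved = exp_bet_metric(hol_delay=50, avg_delay=20, past_throughput=0.1)
m_served  = exp_bet_metric(hol_delay=2,  avg_delay=20, past_throughput=5.0)
```

The scheduler then grants the next resource block to the UE with the largest metric, which is what prevents the starvation the abstract describes.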
An optimized cost-based data allocation model for heterogeneous distributed ... - IJECEIAES
Continuous attempts have been made to improve the flexibility and effectiveness of distributed computing systems. Extensive effort in the fields of connectivity technologies, network programs, high-performance processing components, and storage helps to improve results. However, concerns such as slow response, long execution time, and long completion time have been identified as stumbling blocks that hinder performance and require additional attention. These defects increase the total system cost and make the data allocation procedure for a geographically dispersed setup difficult. The load-based architectural model has been strengthened to improve data allocation performance. To do this, an abstract job model is employed, and a data query file containing input data is processed on a directed acyclic graph. Jobs are executed on the processing engine with the lowest execution cost, and the system's total cost is calculated by summing the costs of communication, computation, and network. The total cost of the system is then reduced using a swarm intelligence algorithm. In heterogeneous distributed computing systems, the suggested approach attempts to reduce the system's total cost and improve data distribution. According to simulation results, the technique efficiently lowers total system cost and optimizes partitioned data allocation.
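The cost model and selection step in this abstract, total cost as the sum of communication, computation, and network costs, with jobs placed on the cheapest engine, can be sketched directly. The engine names and cost figures are made up for illustration:

```python
# Sketch of the abstract's cost model: total cost is the sum of
# communication, computation, and network costs, and each job runs on
# the processing engine whose total is lowest. (The swarm-intelligence
# optimization that further reduces total cost is not modelled here.)

def total_cost(costs):
    return costs["communication"] + costs["computation"] + costs["network"]

engines = {
    "e1": {"communication": 4, "computation": 10, "network": 2},   # total 16
    "e2": {"communication": 6, "computation": 7,  "network": 1},   # total 14
}
best = min(engines, key=lambda e: total_cost(engines[e]))
```

A swarm-intelligence pass would then search over data partitionings, re-evaluating this same total-cost objective for each candidate allocation.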
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Elastic neural network method for load prediction in cloud computing gridIJECEIAES
Cloud computing still has no standard definition, yet it is concerned with Internet or network on-demand delivery of resources and services. It has gained much popularity in last few years due to rapid growth in technology and the Internet. Many issues yet to be tackled within cloud computing technical challenges, such as Virtual Machine migration, server association, fault tolerance, scalability, and availability. The most we are concerned with in this research is balancing servers load; the way of spreading the load between various nodes exists in any distributed systems that help to utilize resource and job response time, enhance scalability, and user satisfaction. Load rebalancing algorithm with dynamic resource allocation is presented to adapt with changing needs of a cloud environment. This research presents a modified elastic adaptive neural network (EANN) with modified adaptive smoothing errors, to build an evolving system to predict Virtual Machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using cloud simulator and made comparisons with previously suggested approaches in the previous work. The experimental results show that suggested method betters present approaches significantly and all these approaches.
A Review - Synchronization Approaches to Digital systemsIJERA Editor
Synchronization is a prime requirement in the process of Digital systems. Wherein new devices are upcoming
towards providing higher service level, advanced distributed systems are been integrated onto a single platform
for higher service provision. However with the integration of large processing units, the distributed processing
needs a high level synchronization with minimum processing overhead. The issue of synchronization was
processed by various approaches. This paper outlines a brief review on the developments made in the field of
synchronization approach to digital system, under distributed mode operation.
The Impact of Data Replication on Job Scheduling Performance in Hierarchical ...graphhoc
In data-intensive applications data transfer is a primary cause of job execution delay. Data access time depends on bandwidth. The major bottleneck to supporting fast data access in Grids is the high latencies of Wide Area Networks and Internet. Effective scheduling can reduce the amount of data transferred across the internet by dispatching a job to where the needed data are present. Another solution is to use a data replication mechanism. Objective of dynamic replica strategies is reducing file access time which leads to reducing job runtime. In this paper we develop a job scheduling policy and a dynamic data replication strategy, called HRS (Hierarchical Replication Strategy), to improve the data access efficiencies. We study our approach and evaluate it through simulation. The results show that our algorithm has improved 12% over the current strategies
DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTINGcscpconf
Parallel computing systems compose task partitioning strategies in a true multiprocessing
manner. Such systems share the algorithm and processing unit as computing resources which
leads to highly inter process communications capabilities. The main part of the proposed
algorithm is resource management unit which performs task partitioning and co-scheduling .In
this paper, we present a technique for integrated task partitioning and co-scheduling on the
privately owned network. We focus on real-time and non preemptive systems. A large variety of
experiments have been conducted on the proposed algorithm using synthetic and real tasks.
Goal of computation model is to provide a realistic representation of the costs of programming
The results show the benefit of the task partitioning. The main characteristics of our method are
optimal scheduling and strong link between partitioning, scheduling and communication. Some
important models for task partitioning are also discussed in the paper. We target the algorithm
for task partitioning which improve the inter process communication between the tasks and use
the recourses of the system in the efficient manner. The proposed algorithm contributes the
inter-process communication cost minimization amongst the executing processes.
Cloud computing Review over various scheduling algorithmsIJEEE
Cloud computing has taken an importantposition in the field of research as well as in thegovernment organisations. Cloud computing uses virtualnetwork technology to provide computer resources tothe end users as well as to the customer’s. Due tocomplex computing environment the use of high logicsand task scheduler algorithms are increase which resultsin costly operation of cloud network. Researchers areattempting to build such kind of job scheduling algorithms that are compatible and applicable in cloud computing environment.In this paper, we review research work which is recently proposed by researchers on the base of energy saving scheduling techniques. We also studying various scheduling algorithms and issues related to them in cloud computing.
A survey of various scheduling algorithm in cloud computing environmenteSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Optimized Assignment of Independent Task for Improving Resources Performance ...ijgca
Grid computing has emerged from the category of distributed and parallel computing, where heterogeneous resources from different networks are used simultaneously to solve a particular problem that needs a huge amount of resources. The potential of grid computing depends on many issues, such as security of resources, heterogeneity of resources, fault tolerance, resource discovery and job scheduling. Scheduling is one of the core steps to efficiently exploit the capabilities of heterogeneous distributed computing resources and is an NP-complete problem. To achieve the promising potential of grid computing, an effective and efficient job scheduling algorithm is proposed, which optimizes two important criteria to improve the performance of resources: makespan time and resource utilization. In addition, we have classified various task scheduling heuristics in grids on the basis of their characteristics.
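A well-known makespan-minimizing heuristic of the kind this abstract surveys is min-min, sketched below. This is a generic illustration, not the paper's proposed algorithm, and the expected-time-to-compute (ETC) values are made up for the example.

```python
def min_min(etc):
    """Min-min heuristic for independent tasks on heterogeneous machines.
    etc[i][j] = expected time of task i on machine j.
    Returns (assignment, makespan)."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines            # machine available times
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # pick the (task, machine) pair with the globally minimum
        # completion time among all unscheduled tasks
        best = None
        for i in unscheduled:
            for j in range(n_machines):
                ct = ready[j] + etc[i][j]
                if best is None or ct < best[0]:
                    best = (ct, i, j)
        ct, i, j = best
        assignment[i] = j
        ready[j] = ct
        unscheduled.remove(i)
    return assignment, max(ready)

etc = [[4, 6], [3, 5], [5, 2], [6, 7]]    # 4 tasks, 2 machines
assignment, makespan = min_min(etc)
print(assignment, makespan)
```

Resource utilization can then be read off as each machine's busy time divided by the makespan.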
Similar to A Novel Approach in Scheduling Of the Real- Time Tasks In Heterogeneous Multicore Processor with Fuzzy Logic Technique For Micro-grid Power Management
Hardware simulation for exponential blind equal throughput algorithm using sy...IJECEIAES
Scheduling is the process of allocating radio resources to User Equipment (UE) that transmits different flows at the same time. It is performed by the scheduling algorithm implemented in the Long Term Evolution base station, the Evolved Node B. Most proposed algorithms do not focus on handling real-time and non-real-time traffic simultaneously, so a UE with bad channel quality may starve because no resources are allocated to it for a long time. To solve these problems, the Exponential Blind Equal Throughput (EXP-BET) algorithm is proposed: the user with the highest priority metric, calculated using the EXP-BET metric equation, is allocated resources first. This study investigates the implementation of the EXP-BET scheduling algorithm on an FPGA platform. The metric equation of EXP-BET is modelled and simulated using System Generator. The design utilizes only 10% of the available resources on the FPGA. Fixed numbers are used for all inputs to the scheduler. System verification is performed through hardware co-simulation of the EXP-BET metric computation. The output from the hardware co-simulation shows that the EXP-BET metric values match those from the Simulink environment. Thus, the algorithm is ready for prototyping, and a Virtex-6 FPGA is chosen as the platform.
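The "highest metric wins" allocation this abstract describes can be sketched generically. Note that the metric below is only a placeholder blending an exponential head-of-line-delay term (EXP-like) with inverse past throughput (BET-like); it is not the paper's exact EXP-BET equation, and the user fields are invented.

```python
import math

def allocate(users):
    """Give the resource block to the user with the highest priority metric."""
    def metric(u):
        # EXP-like urgency term: grows fast as head-of-line delay
        # approaches/exceeds the delay target
        delay_term = math.exp(u["hol_delay"] / (1.0 + u["target_delay"]))
        # BET-like fairness term: favours users with low past throughput
        fairness_term = 1.0 / max(u["past_throughput"], 1e-9)
        return delay_term * fairness_term
    return max(users, key=metric)["id"]

users = [
    {"id": "UE1", "hol_delay": 5.0,  "target_delay": 10.0, "past_throughput": 2.0},
    {"id": "UE2", "hol_delay": 20.0, "target_delay": 10.0, "past_throughput": 2.0},
    {"id": "UE3", "hol_delay": 5.0,  "target_delay": 10.0, "past_throughput": 0.1},
]
print(allocate(users))
```

With these numbers the starved user UE3 wins the block, illustrating how the throughput term prevents starvation of users with bad channel quality.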
An optimized cost-based data allocation model for heterogeneous distributed ...IJECEIAES
Continuous attempts have been made to improve the flexibility and effectiveness of distributed computing systems. Extensive effort in the fields of connectivity technologies, network programs, high-performance processing components, and storage helps to improve results. However, concerns such as slow response, long execution time, and long completion time have been identified as stumbling blocks that hinder performance and require additional attention. These defects increase the total system cost and make the data allocation procedure for a geographically dispersed setup difficult. The load-based architectural model has been strengthened to improve data allocation performance. To do this, an abstract job model is employed, and a data query file containing input data is processed on a directed acyclic graph. Jobs are executed on the processing engine with the lowest execution cost, and the system's total cost is computed by summing the costs of communication, computation, and network. The total cost of the system is then reduced using a swarm intelligence algorithm. In heterogeneous distributed computing systems, the suggested approach attempts to reduce the system's total cost and improve data distribution. According to simulation results, the technique efficiently lowers total system cost and optimizes partitioned data allocation.
Optimization of Remote Core Locking Synchronization in Multithreaded Programs...ITIIIndustries
This paper proposes algorithms for optimizing the Remote Core Locking (RCL) synchronization method in multithreaded programs. An algorithm for the initialization of RCL locks and algorithms for thread-affinity optimization are developed. The algorithms consider the structures of hierarchical computer systems and non-uniform memory access (NUMA) to minimize the execution time of RCL programs. The experimental results on multi-core computer systems presented in the paper show a reduction in the execution time of RCL programs.
Soft Real-Time Guarantee for Control Applications Using Both Measurement and ...CSCJournals
In this paper, we present a probabilistic admission control algorithm over switched Ethernet to support soft real-time control applications with heterogeneous periodic flows. Our approach is purely end-host based, and it enables real-time application-to-application QoS management over switched Ethernet without sophisticated packet scheduling or resource reservation mechanisms in Ethernet switches or middleware on end hosts. In designing the probabilistic admission control algorithm, we employ both measurement and analytical techniques. In particular, we provide a new and efficient method to identify and estimate the queueing delay probability inside Ethernet switches for heterogeneous periodic flows with variable message size and period. We implement the probabilistic admission control algorithm on the Windows operating system and validate its efficacy through extensive experiments.
Run-Time Adaptive Processor Allocation of Self-Configurable Intel IXP2400 Net...CSCJournals
An ideal network processor, that is, a programmable multi-processor device, must offer both the flexibility and the speed required for packet processing. Current network processor systems generally fall short of this benchmark due to traffic fluctuations inherent in packet networks, and the resulting workload variation on individual pipeline stages over time ultimately affects the overall performance of an otherwise sound system. One potential solution is to change the code running at these stages so as to adapt to the fluctuations; a near-robust system withstanding traffic fluctuations is the dynamic adaptive processor, which reconfigures the entire system and which we introduce and study in this paper. We achieve this by using a crucial decision-making model, transferring the binary code to the processor through the SOAP protocol.
Problems in Task Scheduling in Multiprocessor Systemijtsrd
Contemporary computer systems are multiprocessor or multicomputer machines. Their efficiency depends on good methods of administering the executed work. Fast processing of a parallel application is possible only when its parts are appropriately ordered in time and space. This calls for efficient scheduling policies in parallel computer systems. In this work, deterministic problems of scheduling are considered. The classical scheduling theory assumed that an application at any moment of time is executed by only one processor. This assumption has been weakened recently, especially in the context of parallel and distributed computer systems. This monograph is devoted to problems of deterministically scheduling applications (or tasks, in scheduling terminology) that require more than one processor simultaneously. We name such applications multiprocessor tasks. In this work, the complexity of open multiprocessor task scheduling problems has been established. Algorithms for scheduling multiprocessor tasks on parallel and dedicated processors are proposed. For a special case of applications with regular structure, which allow division into parts of arbitrary size processed independently in parallel, a method for finding the optimal scattering of work in a distributed computer system is proposed. Applications with such regular characteristics are called divisible tasks. The concept of a divisible task enables the creation of tractable computation models in a wide class of computer architectures such as chains, stars, meshes, hypercubes, and multistage networks. The divisible task method also supports the evaluation of computer system performance, and examples of such performance evaluation are presented. This work summarizes earlier works of the author and contains new original results.
Mukul Varshney | Jyotsna | Abhakiran Rajpoot | Shivani Garg"Problems in Task Scheduling in Multiprocessor System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-1 | Issue-4 , June 2017, URL: http://www.ijtsrd.com/papers/ijtsrd2198.pdf http://www.ijtsrd.com/computer-science/computer-architecture/2198/problems-in-task-scheduling-in-multiprocessor-system/mukul-varshney
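The divisible-task idea mentioned in the abstract above has a simple closed form in the easiest setting: if communication cost is negligible, a load of W units split among processors so that all finish simultaneously gives each processor a share proportional to its speed. The speeds and total load below are invented for the demo; real divisible-load models also account for link speeds and scattering order.

```python
def split_divisible_load(total, speeds):
    """Split `total` work units among processors with the given speeds so
    that every processor finishes at the same time (zero-communication
    divisible-load model): share_i = total * speed_i / sum(speeds)."""
    s = sum(speeds)
    return [total * v / s for v in speeds]

shares = split_divisible_load(100.0, [1.0, 2.0, 5.0])
print(shares)  # [12.5, 25.0, 62.5]
# every processor finishes at time share/speed = total/sum(speeds) = 12.5
```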
Super-linear speedup for real-time condition monitoring using image processi...IJECEIAES
Real-time inspection of a large-scale solar system may take a long time to reveal hazardous situations arising from failures during the solar panels' normal operation, so early hazard detection is important. Reducing execution time and improving system performance are the ultimate goals of multiprocessing or multicore systems. Real-time video processing and analysis from two camcorders, thermal and charge-coupled device (CCD), mounted on a drone compose the embedded system proposed for solar panel inspection. The inspection method needs considerable time to capture and process the frames and detect the faulty panels. The system can determine the longitude and latitude of the defect position in real time. In this work, we investigate parallel processing of the image processing operations, which reduces the processing time of the inspection system. The results show a super-linear speedup for real-time condition monitoring in large-scale solar systems. Using the multiprocessing module in Python, we execute fault detection algorithms on streamed frames from both video cameras. The experimental results show a super-linear speedup for thermal and CCD video processing: execution time is reduced by an average factor of 3.1 and 6.3 using 2 and 4 processes, respectively.
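The abstract names Python's multiprocessing module; the core pattern is simply fanning frames out to a process pool. The sketch below is illustrative only: `detect` is a dummy stand-in (a mean-intensity threshold, loosely mimicking a hot spot on a thermal image), not the paper's fault-detection algorithm.

```python
from multiprocessing import Pool

def detect(frame):
    # placeholder "fault detection": flag frames whose mean intensity
    # exceeds a threshold (e.g. a hot spot on a thermal image)
    return sum(frame) / len(frame) > 100

def run(frames, workers=4):
    # fan the frames out to a pool of worker processes
    with Pool(workers) as pool:
        return pool.map(detect, frames)

if __name__ == "__main__":
    frames = [[90] * 64, [120] * 64, [95] * 64, [150] * 64]
    print(run(frames))  # [False, True, False, True]
```

Speedup is then wall-clock time of the serial loop divided by that of `run`; super-linear ratios like those reported usually also reflect cache and I/O overlap effects, not just core count.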
Multiple Downlink Fair Packet Scheduling Scheme in Wi-MaxEditor IJCATR
IEEE 802.16 is a standard for broadband wireless access in metropolitan area networks (MANs). The IEEE 802.16 standard (WiMAX) defines concrete quality of service (QoS) requirements, so a scheduling scheme, and in particular an efficient packet scheduling scheme, is necessary to meet those QoS requirements. In this paper, a novel waiting-queue-based downlink bandwidth allocation architecture drawing from a number of rtPS schedules is proposed to improve the performance of nrtPS services without any impact on other services. This paper proposes an efficient QoS scheduling scheme that satisfies both throughput and delay guarantees for various real-time and non-real-time applications, corresponding to different scheduling schemes for k=1,2,3,4. Simulation results show that the proposed scheduling scheme can provide a tight QoS guarantee in terms of delay for all types of traffic defined in the WiMAX standards. This maintains fairness of allocation and helps to eliminate starvation of lower-priority service classes. The authors propose new, efficient and generalized scheduling schemes for the IEEE 802.16 broadband wireless access system reflecting the delay requirements.
ANALYSIS OF LINK STATE RESOURCE RESERVATION PROTOCOL FOR CONGESTION MANAGEMEN...ijgca
With the widespread deployment of WiFi hotspots, concentrated traffic workload on the Smart Web (SW) can slow down network performance. This paper presents a congestion management strategy considering real-time activities in today's smart web. In the SW context, cooperative packet recovery using a resource reservation procedure for TCP flows was adapted to mitigate packet losses and maintain data consistency between the various access points of a smart web hotspot. Using a real-world scenario, it was confirmed that generic TCP cannot handle traffic congestion in a SW hotspot network: with TCP in scalable workload environments, continuous packet drops at the onset of congestion remain obvious, which is unacceptable for mission-critical domains. An enhanced Link State Resource Reservation Protocol (LS-RSVP), which serves as a dynamic feedback mechanism in smart web hotspots, is presented, and its contextual behaviour was contrasted with the generic TCP model. For LS-RSVP, a simulation experiment for a TCP connection between servers at the remote core layer and the access layer was carried out using selected benchmark metrics. From the results, under realistic workloads, TCP LS-RSVP achieved a steady-state throughput response of about 3650 bits/s, compared with the generic TCP plots in a previous study. Network service availability was found to depend on the fault tolerance of the hotspot network; from the study, a high peak threshold of 0.009 (i.e. 90%) was observed, showing fairly acceptable service availability compared with existing TCP schemes. For packet drop effects, an analysis of the network behaviour with respect to LS-RSVP yielded a drop response of about 0.000106 bits/s, much lower than generic TCP's over 0.38 bits/s. The latency profile of the average FTP download response was found to be 0.030 s, while the FTP upload response yielded about 0.028 s. The results from the study demonstrate efficiency and optimality for realistic loads in smart web contexts.
A NETWORK-BASED DAC OPTIMIZATION PROTOTYPE SOFTWARE 2 (1).pdfSaiReddy794166
The International Journal of Engineering and Science and Research is an online journal published in English. Its aim is to publish peer-reviewed research articles without delay on developments in engineering and science research.
Comparative analysis of the performance of various active queue management te...IJECEIAES
This paper demonstrates the robustness of active queue management techniques to varying load, link capacity and propagation delay in a wireless environment. The performances of four standard controllers used in Transmission Control Protocol/Active Queue Management (TCP/AQM) systems were compared. The active queue management controllers were the Fixed-Parameter Proportional Integral (PI), Random Early Detection (RED), Self-Tuning Regulator (STR) and the Model Predictive Control (MPC). The robustness of the congestion control algorithm of each technique was documented by simulating the varying conditions using MATLAB® and Simulink® software. From the results obtained, the MPC controller gives the best result in terms of response time and controllability in a wireless network with varying link capacity and propagation delay. Thus, the MPC controller is the best bet when adaptive algorithms are to be employed in a wireless network environment. The MPC controller can also be recommended for heterogeneous networks where the network load cannot be estimated.
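One of the four AQM schemes compared in the abstract above, Random Early Detection (RED), has a simple, well-known drop-probability rule that can be sketched directly. The thresholds and `max_p` below are illustrative defaults, not the values used in the paper's simulations.

```python
def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED marking/drop probability as a function of the averaged
    queue length: zero below min_th, linear ramp up to max_p between the
    thresholds, and forced drop at or above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

for q in (2.0, 10.0, 20.0):
    print(q, red_drop_probability(q))
```

A full RED implementation also maintains the exponentially weighted moving average of the queue length and spaces drops out between marked packets; only the probability ramp is shown here.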
System on Chip Based RTC in Power ElectronicsjournalBEEI
Current control systems and emulation systems (Hardware-in-the-Loop, HIL, or Processor-in-the-Loop, PIL) for high-end power-electronic applications often consist of numerous components and interlinking busses: a microcontroller for communication and high-level control, a DSP for real-time control, an FPGA section for fast parallel actions and data acquisition, and multiport RAM structures or bus systems as the interconnecting structure. System-on-Chip (SoC) combines many of these functions on a single die. This gives the advantage of space reduction combined with cost reduction and very fast internal communication. Such systems are becoming very relevant for research and also for industrial applications. The SoC used here as an example combines a dual-core ARM 9 hard processor system (HPS) and an FPGA, including fast interlinks between these components. SoC systems require careful software and firmware concepts to provide real-time control and emulation capability. This paper demonstrates an optimal way to use the resources of the SoC and discusses challenges caused by the internal structure of the SoC. The key idea is to use asymmetric multiprocessing: one core uses a bare-metal operating system for hard real time, while the other core runs a "real-time" Linux for service functions and communication. The FPGA is used for flexible process-oriented interfaces (A/D, D/A, switching signals), quasi-hard-wired protection, and precise timing of the real-time control cycle. This way of implementation is generally known and sometimes even suggested, but to the authors' knowledge it is seldom implemented and documented in the context of demanding real-time control or emulation. The paper details the implementation, including process interfaces, and discusses the advantages and disadvantages of the chosen concept. Measurement results demonstrate the properties of the solution.
RSDC (Reliable Scheduling Distributed in Cloud Computing)IJCSEA Journal
In this paper we present a reliable scheduling algorithm for the cloud computing environment. We create a new algorithm by means of a new technique, with classification, considering the request and acknowledge times of jobs in a qualification function. By evaluating previous algorithms, we find that job scheduling has been performed using parameters associated with a failure rate. Therefore, in the proposed algorithm, some other important parameters are used in addition to the previous ones, so that jobs can be scheduled differently based on these parameters. The work proceeds as follows: the major job is divided into sub-jobs; to balance the jobs, the request and acknowledge times are calculated separately; then the schedule of each job is created by calculating the request and acknowledge times in the form of a shared job. Finally, the efficiency of the system is increased, so the real-time behaviour of this algorithm is improved in comparison with other algorithms, and the total processing time in cloud computing is improved as well.
The aim of this research is speed tracking of the permanent magnet synchronous motor (PMSM) using an intelligent neural-network-based adaptive backstepping control. First, the model of the PMSM in the Park synchronous frame is derived. Then, PMSM speed regulation is investigated using the classical method based on field-oriented control theory. Thereafter, a robust nonlinear controller employing an adaptive backstepping strategy is investigated in order to achieve good tracking performance under motor parameter changes and external load torque application. In the final step, a neural network estimator is integrated with the adaptive controller to estimate the motor parameter values and the load disturbance, enhancing the effectiveness of the adaptive backstepping controller. The robustness of the presented control algorithm is demonstrated using simulation tests. The obtained results clearly demonstrate that the presented NN-adaptive control algorithm provides good speed tracking performance in the presence of motor parameter variation and load application.
This paper presents a fast and accurate fault detection, classification and direction discrimination algorithm for transmission lines using one-dimensional convolutional neural networks (1D-CNNs), which have an ingrained adaptive model that combines feature extraction and fault classification into one learning algorithm. The proposed algorithm is directly usable with raw data, removing the need for a discrete feature-extraction method and resulting in a more effective protection system. The three-phase voltage and current signals at one end of the transmission line, at the relay location, are taken as input to the proposed 1D-CNN algorithm. A 132 kV power transmission line is simulated in MATLAB Simulink to prepare the training and testing data for the proposed 1D-CNN algorithm. The testing accuracy of the proposed algorithm is compared with two conventional methods, a neural network and a fuzzy neural network. The test results show that the proposed detection system classifies faults and discriminates their direction quickly and with high accuracy compared with the conventional methods under various fault conditions.
Solar energy is among the most widespread renewable energy sources; solar panels offer a green, clean, and environmentally friendly source of energy. Despite the several advantages of photovoltaic systems, the random operation of the photovoltaic generator presents a great challenge in the presence of a critical load. Among the most-used solutions to this problem is combining solar panels with generators, with the public grid, or with both. In this paper, an energy management strategy with a safety aspect is proposed using artificial neural networks (ANNs), in order to ensure a continuous supply of electricity to consumers with maximum use of renewable energy.
In this paper, the artificial neural network (ANN) has been utilized for rotating machinery faults detection and classification. First, experiments were performed to measure the lateral vibration signals of laboratory test rigs for rotor-disk-blade when the blades are defective. A rotor-disk-blade system with 6 regular blades and 5 blades with various defects was constructed. Second, the ANN was applied to classify the different x- and y-axis lateral vibrations due to different blade faults. The results based on training and testing with different data samples of the fault types indicate that the ANN is robust and can effectively identify and distinguish different blade faults caused by lateral vibrations in a rotor. As compared to the literature, the present paper presents a novel work of identifying and classifying various rotating blade faults commonly encountered in rotating machines using ANN. Experimental data of lateral vibrations of the rotor-disk-blade system in both x- and y-directions are used for the training and testing of the network.
This paper focuses on the artificial bee colony (ABC) algorithm, which is proposed to solve the optimal power flow (OPF), a nonlinear optimization problem. To solve this problem, we apply the ABC algorithm to a power system incorporating wind power. The proposed approach is applied to a standard IEEE-30 system with wind farms located on different buses and with different penetration levels to show the impact of wind farms on the system, in order to obtain the optimal settings of the control variables of the OPF problem. Based on the technical results obtained, the ABC algorithm is shown to achieve lower cost and losses than the other methods applied while incorporating wind power into the system, so high performance would be gained.
The significance of this work is to intensify the effectiveness of the solar panel by improving on primordial solar tracking systems. We propose a solar positioning system using the global positioning system (GPS), an artificial neural network (ANN) and image processing (IP). The azimuth angle of the sun is evaluated using GPS, which provides latitude, longitude, date and time. Image processing is used to locate the sun in the image, from which the centroid of the sun is calculated; finally, comparing the centroid of the sun with the GPS quadrant yields the optimum tracking point. Weather conditions and situations are observed through AI decision-making with the help of IP algorithms. The presented adaptation is analyzed and established via experimental results, which are made available in cloud storage for systematization. The proposed system improves power gain by 59.21% and 10.32% compared to a stable system (SS) and a two-axis solar following system (TASF), respectively. The IoT-based two-axis solar following system (IoT-TASF) reduces the azimuth angle tracking error by 0.20 degrees.
Kosovo has limited renewable energy resources and its power generation sector is based on fossil fuels. Such a situation emphasizes the importance of active research and efficient use of the renewable energy potential. According to the analysis of meteorological data for Kosovo, among the most attractive potential wind power sites are the locations known as Kitka (42° 29' 41" N, 21° 36' 45" E) and Koznica (42° 39' 32" N, 21° 22' 30" E). The two terrains in which the analysis was carried out are mountain areas, with altitudes of 1142 m (Kitka) and 1230 m (Koznica). At the same measuring height, about 84 m above the ground, the following average wind speeds are obtained: Kitka 6.667 m/s and Koznica 6.16 m/s. Since the difference in wind speed is quite large while the difference in altitude is not, analyses are made of the terrain characteristics, including the relief features. This paper studies how much the roughness of the terrain influences the output energy, and how much the assumptions made about roughness affect the annual energy produced.
Large-scale grid-tied photovoltaic (PV) station are increasing rapidly. However, this large penetration of PV system creates frequency fluctuation in the grid due to the intermittency of solar irradiance. Therefore, in this paper, a robust droop control mechanism of the battery energy storage system (BESS) is developed in order to damp the frequency fluctuation of the multi-machine grid system due to variable active power injected from the PV panel. The proposed droop control strategy incorporates frequency error signal and dead-band for effective minimization of frequency fluctuation. The BESS system is used to consume/inject an effective amount of active power based upon the frequency oscillation of the grid system. The simulation analysis is carried out using PSCAD/EMTDC software to prove the effectiveness of the proposed droop control-based BESS system. The simulation result implies that the proposed scheme can efficiently curtail the frequency oscillation.
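The droop-with-dead-band rule this abstract describes can be sketched as a scalar function: the BESS does nothing while frequency stays inside the dead-band, and absorbs or injects active power proportional to the frequency error outside it. The gain, dead-band width, and power limit below are illustrative values, not the paper's tuned parameters.

```python
def bess_power(freq, f_nom=50.0, deadband=0.05, droop_gain=200.0, p_max=100.0):
    """Frequency-droop control with dead-band for a battery energy storage
    system. Returns active power in kW: positive = inject, negative = absorb."""
    err = freq - f_nom
    if abs(err) <= deadband:
        return 0.0                       # inside dead-band: do nothing
    # over-frequency (err > 0) -> absorb power (negative injection);
    # under-frequency -> inject; the dead-band edge is the droop origin
    p = -droop_gain * (err - deadband if err > 0 else err + deadband)
    return max(-p_max, min(p_max, p))

for f in (50.0, 50.03, 50.2, 49.7):
    print(f, bess_power(f))
```

Subtracting the dead-band edge before applying the gain makes the power command continuous at the dead-band boundary, which avoids chattering there.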
This study investigates experimentally the performance of two-dimensional solar tracking systems with reflectors using a commercial silicon-based photovoltaic module, with open- and closed-loop control systems. Different reflector materials were also investigated. The experiments were performed at the Hashemite University campus in Zarqa, at a latitude of 32°, in February and March. Photovoltaic output power and performance were analyzed. It was found that the modified photovoltaic module with a mirror reflector generated the highest power, while the temperature reached a maximum of 53 °C. The modified module suggested in this study produced 5% more PV power than the two-dimensional solar tracking system without a reflector and 12.5% more PV power than the fixed PV module with a 26° tilt angle.
This paper focuses on the modeling and control of a wind energy conversion chain using a permanent magnet synchronous machine. The system comprises a turbine, a generator, and DC/DC and DC/AC power converters, connected on both sides of the DC bus; the inverter is followed by a filter connected to the grid. In this paper, we use two types of controllers. For the stator-side converter, we consider the Takagi-Sugeno approach, where the controller parameters are computed via the theory of linear matrix inequalities. The stability synthesis is checked using Lyapunov theory. For the grid-side converter, a proportional-integral controller is used to keep the DC-bus voltage constant and to control both types of power. The simulation results demonstrate the robustness of the approach used.
The development of wind speed modeling plays a very important role in obtaining actual wind speed data for the benefit of future power plant planning. The wind speed in this paper is obtained from a PCE-FWS 20 measuring instrument at 30-minute intervals, accumulated into monthly data for one year (2019). Although much wind speed modeling has been done by researchers, the wind speed model proposed in this study is obtained from a modified Rayleigh distribution. In this study, the Rayleigh scale factor (Cr) and the modified Rayleigh scale factor (Cm) are calculated, and the observed wind speed is compared with the predicted wind characteristics. The goodness-of-fit tests used the correlation coefficient (R2), root mean square error (RMSE), and mean absolute percentage error (MAPE). The proposed modified Rayleigh model provides very good results for users.
This paper deals with an advanced design for a pump powered by solar energy to supply agricultural lands with water. Maximum power point tracking is used to extract the maximum energy available from the solar panels, and MPPT techniques such as incremental conductance, perturb & observe, fractional short-circuit current, and fractional open-circuit voltage are compared to find the best among them. The solar system is designed with these main parts: photovoltaic (PV) panel, direct current/direct current (DC/DC) converter, inverter, and filter; in addition, a battery is used to save energy in case of increased demand when solar radiation is unavailable, as well as to store energy when generation exceeds demand. This work was done using the MATLAB Simulink program.
The objective of this paper is to provide an overview of the current state of renewable energy resources in Bangladesh, as well as to examine various forms of renewable energies in order to gain a comprehensive understanding of how to address Bangladesh's power crisis issues in a sustainable manner. Electricity is currently the most useful kind of energy in Bangladesh. It has a substantial influence on a country's socioeconomic standing and living standards. Maintaining a stable source of energy at a cost that is affordable to everyone has been a constant battle for decades. Bangladesh is blessed with a wealth of natural resources. Bangladesh has a huge opportunity to accelerate its economic development while increasing energy access, livelihoods, and health for millions of people in a sustainable way due to the renewable energy system.
When the irradiance distribution over the photovoltaic panels is uniform, the pursuit of the maximum power point is not reached, which has allowed several researchers to use traditional MPPT techniques to solve this problem Among these techniques a PSO algorithm is used to have the maximum global power point (GMPPT) under partial shading. On the other hand, this one is not reliable vis-à-vis the pursuit of the MPPT. Therefore, in this paper we have treated another technique based on a new modified PSO algorithm so that the power can reach its maximum point. The PSO algorithm is based on the heuristic method which guarantees not only the obtaining of MPPT but also the simplicity of control and less expensive of the system. The results are obtained using MATLAB show that the proposed modified PSO algorithm performs better than conventional PSO and is robust to different partial shading models.
A stable operation of wind turbines connected to the grid is an essential requirement to ensure the reliability and stability of the power system. To achieve such operational objective, installing static synchronous compensator static synchronous compensator (STATCOM) as a main compensation device guarantees the voltage stability enhancement of the wind farm connected to distribution network at different operating scenarios. STATCOM either supplies or absorbs reactive power in order to ensure the voltage profile within the standard-margins and to avoid turbine tripping, accordingly. This paper present new study that investigates the most suitable-location to install STATCOM in a distribution system connected wind farm to maintain the voltage-levels within the stability margins. For a large-scale squirrel cage induction generator squirrel-cage induction generator (SCIG-based) wind turbine system, the impact of STATCOM installation was tested in different places and voltage-levels in the distribution system. The proposed method effectiveness in enhancing the voltage profile and balancing the reactive power is validated, the results were repeated for different scenarios of expected contingencies. The voltage profile, power flow, and reactive power balance of the distribution system are observed using MATLAB/Simulink software.
The electrical and environmental parameters of polymer solar cells (PSC) provide important information on their performance. In the present article we study the influence of temperature on the voltage-current (I-V) characteristic at different temperatures from 10 °C to 90 °C, and important parameters like bandgap energy Eg, and the energy conversion efficiency η. The one-diode electrical model, normally used for semiconductor cells, has been tested and validated for the polemeral junction. The PSC used in our study are formed by the poly(3-hexylthiophene) (P3HT) and [6,6]-phenyl C61-butyric acid methyl ester (PCBM). Our technique is based on the combination of two steps; the first use the Least Mean Squares (LMS) method while the second use the Newton-Raphson algorithm. The found results are compared to other recently published works, they show that the developed approach is very accurate. This precision is proved by the minimal values of statistical errors (RMSE) and the good agreement between both the experimental data and the I-V simulated curves. The obtained results show a clear and a monotonic dependence of the cell efficiency on the studied parameters.
The inverter is the principal part of the photovoltaic (PV) systems that assures the direct current/alternating current (DC/AC) conversion (PV array is connected directly to an inverter that converts the DC energy produced by the PV array into AC energy that is directly connected to the electric utility). In this paper, we present a simple method for detecting faults that occurred during the operation of the inverter. These types of faults or faults affect the efficiency and cost-effectiveness of the photovoltaic system, especially the inverter, which is the main component responsible for the conversion. Hence, we have shown first the faults obtained in the case of the short circuit. Second, the open circuit failure is studied. The results demonstrate the efficacy of the proposed method. Good monitoring and detection of faults in the inverter can increase the system's reliability and decrease the undesirable faults that appeared in the PV system. The system behavior is tested under variable parameters and conditions using MATLAB/Simulink.
The electrical distribution network is undergoing tremendous modifications with the introduction of distributed generation technologies which have led to an increase in fault current levels in the distribution network. Fault current limiters have been developed as a promising technology to limit fault current levels in power systems. Though, quite a number of fault current limiters have been developed; the most common are the superconducting fault current limiters, solid-state fault current limiters, and saturated core fault current limiters. These fault current limiters present potential fault current limiting solutions in power systems. Nevertheless, they encounter various challenges hindering their deployment and commercialization. This research aimed at designing a bridge-type nonsuperconducting fault current limiter with a novel topology for distribution network applications. The proposed bridge-type nonsuperconducting fault current limiter was designed and simulated using PSCAD/EMTDC. Simulation results showed the effectiveness of the proposed design in fault current limiting, voltage sag compensation during fault conditions, and its ability not to affect the load voltage and current during normal conditions as well as in suppressing the source powers during fault conditions. Simulation results also showed very minimal power loss by the fault current limiter during normal conditions.
This paper provides a new approach to reducing high-order harmonics in 400 Hz inverter using a three-level neutral-point clamped (NPC) converter. A voltage control loop using the harmonic compensation combined with NPC clamping diode control technology. The capacitor voltage imbalance also causes harmonics in the output voltage. For 400 Hz inverter, maintain a balanced voltage between the two input (direct current) (DC) capacitors is difficult because the pulse width modulation (PWM) modulation frequency ratio is low compared to the frequency of the output voltage. A method of determining the current flowing into the capacitor to control the voltage on the two balanced capacitors to ensure fast response reversal is also given in this paper. The combination of a high-harmonic resonator controller and a neutral-point voltage controller working together on the 400 Hz NPC inverter structure is given in this paper.
Direct current (DC) electronic load is a useful equipment for testing the electrical system. It can emulate various load at a high rating. The electronic load requires a power converter to operate and a linear regulator is a common option. Nonetheless, it is hard to control due to the temperature variation. This paper proposed a DC electronic load using the boost converter. The proposed electronic load operates in the continuous current mode and control using the integral controller. The electronic load using the boost converter is compared with the electronic load using the linear regulator. The results show that the boost converter able to operate as an electronic load with an error lower than 0.5% and response time lower than 13 ms.
More from International Journal of Power Electronics and Drive Systems (20)
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Automobile Management System Project Report.pdfKamal Acharya
The proposed project is developed to manage the automobile in the automobile dealer company. The main module in this project is login, automobile management, customer management, sales, complaints and reports. The first module is the login. The automobile showroom owner should login to the project for usage. The username and password are verified and if it is correct, next form opens. If the username and password are not correct, it shows the error message.
When a customer search for a automobile, if the automobile is available, they will be taken to a page that shows the details of the automobile including automobile name, automobile ID, quantity, price etc. “Automobile Management System” is useful for maintaining automobiles, customers effectively and hence helps for establishing good relation between customer and automobile organization. It contains various customized modules for effectively maintaining automobiles and stock information accurately and safely.
When the automobile is sold to the customer, stock will be reduced automatically. When a new purchase is made, stock will be increased automatically. While selecting automobiles for sale, the proposed software will automatically check for total number of available stock of that particular item, if the total stock of that particular item is less than 5, software will notify the user to purchase the particular item.
Also when the user tries to sale items which are not in stock, the system will prompt the user that the stock is not enough. Customers of this system can search for a automobile; can purchase a automobile easily by selecting fast. On the other hand the stock of automobiles can be maintained perfectly by the automobile shop manager overcoming the drawbacks of existing system.
Vaccine management system project report documentation..pdfKamal Acharya
The Division of Vaccine and Immunization is facing increasing difficulty monitoring vaccines and other commodities distribution once they have been distributed from the national stores. With the introduction of new vaccines, more challenges have been anticipated with this additions posing serious threat to the already over strained vaccine supply chain system in Kenya.
TECHNICAL TRAINING MANUAL GENERAL FAMILIARIZATION COURSEDuvanRamosGarzon1
AIRCRAFT GENERAL
The Single Aisle is the most advanced family aircraft in service today, with fly-by-wire flight controls.
The A318, A319, A320 and A321 are twin-engine subsonic medium range aircraft.
The family offers a choice of engines
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
Event Management System Vb Net Project Report.pdfKamal Acharya
In present era, the scopes of information technology growing with a very fast .We do not see any are untouched from this industry. The scope of information technology has become wider includes: Business and industry. Household Business, Communication, Education, Entertainment, Science, Medicine, Engineering, Distance Learning, Weather Forecasting. Carrier Searching and so on.
My project named “Event Management System” is software that store and maintained all events coordinated in college. It also helpful to print related reports. My project will help to record the events coordinated by faculties with their Name, Event subject, date & details in an efficient & effective ways.
In my system we have to make a system by which a user can record all events coordinated by a particular faculty. In our proposed system some more featured are added which differs it from the existing system such as security.
Int J Power Electron & Dri Syst ISSN: 2088-8694
A Novel Approach in Scheduling of the Real-Time Tasks in Heterogeneous Multicore… (Lavanya Dhanesh)
integrates both spatial and temporal parallelism to benefit from the advantages of both sides. It forms a parallel pipeline core topology, where each stage contains multiple parallel cores, as in Random and Bipar [6, 7].
Traditional task scheduling schemes, such as list-based scheduling and clustering-based scheduling, are capable of reducing program latency by exploiting fine-grained task-level parallelism [8, 9, 10]. The performance of preemptive uniprocessor scheduling for real-time tasks was analysed in [11], and processor speed control for power reduction in real-time systems was studied in [12]. However, these scheduling schemes do not apply pipelining, and they suffer significant throughput deterioration when executing periodic packet processing tasks. Many researchers have presented results on reducing protocol latency for high-speed gateways and telecommunication systems based on hybrid parallelism. Developing a packet processing system that considers both latency and throughput for multicore architectures is both interesting and challenging [13, 14]. Thus, we present a latency- and throughput-aware scheduling scheme based on a parallel-pipeline topology.
Along with increased throughput and reduced latency, however, comes increased power consumption for network applications running on multicore architectures. Collectively, millions of servers in the global network consume a great deal of power. Chip manufacturers continue to increase both the number of cores and their frequencies, substantially increasing both dynamic and static power consumption. Higher power consumption increases costs, both directly and indirectly. Energy itself is expected to become more expensive, especially if environmental impacts are factored into consumption. Higher power consumption also increases core temperature, which exponentially increases the cost of cooling and packaging. Higher temperatures also increase indirect and life-cycle costs through reduced system performance, circuit reliability, and chip lifetime. Therefore, power management is a first-order design issue. In proposing task-level parallel-pipeline scheduling, we found no existing work that considers its power budget issues. Previous power-aware algorithms either have not considered latency or have not explored the parallel-pipeline topology for task scheduling. Since power gating cannot be directly applied to task scheduling, we resort to DVFS to integrate power-awareness into parallel-pipeline scheduling.
2. METHODOLOGIES
2.1 Latency And Throughput Aware Scheduling (LATA)
We propose LATA, a Latency- And Throughput-Aware packet processing system for multicore architectures. It adopts hybrid parallelism with a parallel-pipeline core topology at fine-grained task level to achieve low latency and high throughput. We accomplish this goal in three steps. First, we design a list-based pipeline scheduling algorithm that works from the task graph. Second, we apply a deterministic search-based refinement process to reduce latency and improve throughput through local adjustment. Third, we devise a cache-aware resource mapping scheme to generate a practical mapping onto a real machine.
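As an illustration of the kind of scheduling involved in the first step, the sketch below assigns each task to the earliest pipeline stage its dependencies allow (simple level scheduling) and reports the resulting latency and throughput. This is only a simplified stand-in under stated assumptions, not the paper's actual list-based algorithm; all names are illustrative.

```python
def pipeline_schedule(tasks, deps, comp):
    """Assign each task to the earliest pipeline stage its dependencies
    allow (level scheduling). Tasks in one stage run on parallel cores,
    so the stage time is the longest task in that stage."""
    stage_of = {}

    def stage(t):
        # A task's stage is one past its latest predecessor's stage.
        if t not in stage_of:
            stage_of[t] = 1 + max((stage(p) for p in deps.get(t, ())), default=-1)
        return stage_of[t]

    for t in tasks:
        stage(t)
    stages = [[] for _ in range(1 + max(stage_of.values()))]
    for t in tasks:
        stages[stage_of[t]].append(t)
    stage_times = [max(comp[t] for t in st) for st in stages]
    latency = sum(stage_times)           # schedule length L
    throughput = 1.0 / max(stage_times)  # Th = 1 / Tmax
    return stages, latency, throughput
```

The refinement and cache-aware mapping steps would then adjust this initial stage assignment.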
To the best of our knowledge, LATA is the first of its kind to consider both latency and throughput in packet processing systems. We implement LATA on an Intel machine with two Quad-Core Xeon E5335 processors and conduct extensive experiments to show its better performance over other systems such as Parallel, Greedy, Random, and Bipar. Based on six real packet processing applications chosen from NetBench and PacketBench, LATA exhibits an average latency reduction of 36.5% across all applications without substantially degrading throughput. It shows a maximum latency reduction of 62.2% for the URL application over Random, with comparable throughput performance.
2.2 LATA System Design
In LATA's system design, we first generate the corresponding task graph with both computation and communication information. Then, we proceed in a three-step procedure to schedule and map the task graph according to our design. Last, we deploy the program onto a real multicore machine to obtain its performance results.
Figure 1. LATA system design flow chart.
2.3 Communication measurement
The communication time between two cores in a multicore architecture cannot be measured accurately unless we know the exact location of the cores. In the LATA design, we therefore use an average communication cost based on data cache access time, as given in Equations (1) and (2). Comm_avg denotes the average communication cost to transfer a unit data set, which can be approximated from the system memory latencies (L1, L2, and main memory access times) and the program's data cache performance (L1 and L2 cache hit rates). DataSize refers to the size of the data set transferred between two communicating tasks.
Comm = Comm_avg · DataSize (1)

Comm_avg = T_L1 · Hit_L1 + T_L2 · (1 − Hit_L1) · Hit_L2 + T_mem · (1 − Hit_L1) · (1 − Hit_L2) (2)
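Equations (1) and (2) translate directly into code. The sketch below is illustrative; the parameter names are ours, and the representative L1/L2/memory latencies would have to be measured on the target machine.

```python
def comm_cost(data_size, t_l1, t_l2, t_mem, hit_l1, hit_l2):
    """Average communication cost between two tasks (Eqs. 1-2).
    The unit cost is the expected memory access time, weighted by the
    L1/L2 hit rates; the total cost scales with the transferred size.

    t_l1, t_l2, t_mem : L1, L2 and main-memory access times
    hit_l1, hit_l2    : L1 and L2 cache hit rates in [0, 1]
    """
    comm_avg = (t_l1 * hit_l1
                + t_l2 * (1 - hit_l1) * hit_l2
                + t_mem * (1 - hit_l1) * (1 - hit_l2))
    return comm_avg * data_size
```

For example, with hypothetical latencies of 3, 14, and 200 cycles and hit rates of 0.9 and 0.8, the expected unit cost is 7.82 cycles per data unit.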
2.4 Problem Statement
Latency is defined as the schedule length of a program, and throughput as the pipeline's packet processing rate. The problem statement is: given the latency constraint L0, schedule a packet processing program in a parallel-pipeline core topology so as to maximize the throughput Th. The aim is to rearrange the tasks into the parallel-pipeline task graph shown in Figure 3, so that the total execution time T1 + T2 + T3 + T4 is minimized while the throughput is kept as high as possible. Since the throughput is the inverse of the longest stage time, 1/Tmax, in pipelining, we form our objective function in Equation (3), where L is the scheduled latency.
Maximize Th = 1 / Tmax,  subject to L ≤ L0 (3)
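A small sketch of how the Equation (3) objective could be evaluated over candidate schedules; the stage-time vectors here are hypothetical inputs, not values from the paper.

```python
def best_schedule(candidates, latency_budget):
    """Among candidate stage-time vectors, pick the one maximising
    Th = 1/Tmax subject to the latency constraint sum(T) <= L0 (Eq. 3).
    Returns None when no candidate satisfies the constraint."""
    feasible = [c for c in candidates if sum(c) <= latency_budget]
    if not feasible:
        return None
    # Throughput is the inverse of the bottleneck stage time.
    return max(feasible, key=lambda c: 1.0 / max(c))
```

A more balanced pipeline (smaller Tmax) wins even if it uses more stages, as long as its total latency stays within L0.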
3. PROPOSED LATENCY REDUCTION
Latency can be reduced by reducing either computation time or communication time. Because computation dominates the overall execution time for most packet processing applications running on multicore architectures, we prioritize computation reduction in designing LATA. Hence, LATA first applies latency hiding to reduce computation time. Then, CCP elimination and CCP reduction are used to reduce communication time. Computation reduction: we define a critical node as the node in a pipeline stage that dominates the computation time. Latency hiding is then a technique that moves a critical node from one stage to an adjacent stage without violating dependencies, so that its computation time is shadowed by the other critical node in the new stage. Backward hiding (BaH) places a critical node into its preceding stage; forward hiding (FoH) places a critical node into its following stage, as shown in Figure 2.
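The legality and usefulness checks for backward and forward hiding can be sketched as follows. This is our simplified interpretation of the rule above, not the paper's exact procedure; `preds`/`succs` are illustrative dependency maps.

```python
def try_latency_hiding(stages, comp, preds, succs, s):
    """Attempt latency hiding on the critical node of stage s (sketch).
    BaH moves it to the preceding stage, FoH to the following stage;
    a move is legal only if it breaks no dependency, and useful only if
    the node's time is shadowed by the target stage's critical node."""
    crit = max(stages[s], key=lambda t: comp[t])
    # Backward hiding (BaH): no predecessor of crit may sit in stage s-1,
    # and crit must be shadowed by the critical node already there.
    if (s > 0 and stages[s - 1]
            and not preds.get(crit, set()) & set(stages[s - 1])
            and comp[crit] <= max(comp[t] for t in stages[s - 1])):
        stages[s].remove(crit)
        stages[s - 1].append(crit)
        return "BaH"
    # Forward hiding (FoH): the symmetric check against stage s+1.
    if (s + 1 < len(stages) and stages[s + 1]
            and not succs.get(crit, set()) & set(stages[s + 1])
            and comp[crit] <= max(comp[t] for t in stages[s + 1])):
        stages[s].remove(crit)
        stages[s + 1].append(crit)
        return "FoH"
    return None
```

After a successful move, the source stage's time drops to its next-longest task while the target stage's time is unchanged, so the overall latency shrinks.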
Figure 2. Latency hiding on node E.
4. PERFORMANCE EVALUATION
The latency and throughput of six applications under LATA, Parallel, and List are shown in Figures 3 and 4. We observe that Parallel suffers from high latency due to its sequential execution of tasks. Compared with Parallel, LATA reduces latency by an average of 34.2%.
Figure 3. Latency of six applications by LATA, Parallel and List
Figure 4. Throughput of six applications by LATA, Parallel and List
Particularly, for URL, LATA achieves the maximal latency reduction of 62.2%. In addition,
LATA’s throughput is close to that of Parallel in spite of the 75% latency constraint. This is because LATA
is capable of optimizing its parallel pipeline core topology to produce good throughput. With respect to List,
which is designed to produce the lowest latency, LATA actually matches its latency performance in most
cases by aggressively exploiting task-level parallelism. Furthermore, LATA outperforms List in throughput
by an average of 41.0% and a maximum of 56.7% for Route.
5. POWER-AWARE PARALLEL-PIPELINE SCHEDULING ALGORITHM
We introduced a novel task-level parallel-pipeline scheduling for network applications that can attain high throughput under given latency constraints. In this section, we address the power budget issue of this scheduling paradigm for network packet processing. We aim to optimize both throughput and latency under a given power budget by appropriately applying per-core DVFS, and we propose a three-step solution to achieve this goal.
5.1 A three-step recursive algorithm
STEP 1: In the first step, we reduce power without compromising throughput or latency, by keeping every pipeline stage time Ti, i = 1, 2, ..., S, unchanged. We define a critical node as the node in a pipeline stage that dominates the computation time; therefore, the computation time of a critical node equals the pipeline stage time (ti = Ti). For each stage Si, we increase the computation time of the non-critical nodes in that
stage to the length of Ti. Since all stage times remain the same, the throughput and the latency also remain unchanged during this step, as depicted in Figure 5.
Figure 5. Illustration of the first step of the algorithm
Figure 6. Illustration of the second step of the algorithm
Figure 7. Illustration of the third step of the algorithm
STEP 2: In the second step, we reduce power with throughput unchanged and minimal latency increase, as shown in Figure 6. This is achieved by keeping the longest stage time Tmax unchanged while increasing the stage times of the other stages. We denote the stage with Tmax as the bottleneck stage in the pipeline; all other stages are non-bottleneck stages. We define ∆T as the shortest time period by which we can increase the latency. To minimize the latency increase, we iteratively increase the latency by ∆T until the power budget is satisfied or all stages reach Tmax. In the former case, the algorithm returns, and the resulting schedule guarantees the minimal latency increase, which will be proved shortly. In the latter case, we proceed to step three.
STEP 3: In the third step, we reduce power while minimizing the loss in both throughput and latency. After step two, every stage has the same stage time Tmax. Following the same rule for choosing a candidate stage as in step two, we choose a stage and further increase its stage time by ∆T. Since the original Tmax is increased, the throughput is compromised accordingly, as shown in Figure 7.
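The latter two steps can be sketched as follows, under an assumed DVFS power model: a stage with work w and stage time T runs its core at f = w/T, and with V scaling linearly with f, dynamic power scales as (w/T)³ (following Pow = Ka·f·V²). The candidate-selection rule here (stretch the shortest stage) is a simple stand-in for the paper's optimal choice, and all names are illustrative.

```python
def power_aware_stretch(stage_times, work, budget, dt):
    """Sketch of steps 2-3 of the power-aware algorithm.
    Assumed model: power of a stage ~ (work / stage_time)**3.
    Step 2 stretches non-bottleneck stages toward Tmax (throughput kept);
    step 3 then stretches every stage uniformly (throughput sacrificed)."""
    T = list(stage_times)
    power = lambda: sum((w / t) ** 3 for w, t in zip(work, T))
    tmax = max(T)
    # Step 2: grow the shortest non-bottleneck stage by dt per iteration
    # until the budget holds or every stage reaches Tmax.
    while power() > budget and min(T) < tmax:
        i = min((i for i in range(len(T)) if T[i] < tmax), key=lambda i: T[i])
        T[i] = min(T[i] + dt, tmax)
    # Step 3: all stages are at Tmax; keep stretching uniformly, which
    # raises Tmax and so trades throughput for power.
    while power() > budget:
        T = [t + dt for t in T]
    return T
```

Step 2 leaves Tmax (and hence throughput) untouched; only when every stage has been padded up to Tmax does step 3 start compromising throughput.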
5.2 Power model
Consider that task T consists of C clock cycles on processor P, which runs at voltage V and
frequency f. We assume that C does not change with different V and f. For a given voltage V , processor P
has an average power consumption Pow. It is known that processor power consumption is dominated by
dynamic power dissipation given by:
Pow = Ka · f · V² (4)
where Ka is a task- and processor-dependent factor determined by the switched capacitance.
The energy consumed by executing task T on processor P is computed as:

E = C · Pow / f (5)

We can rewrite it as:

E = C · E_f,V (6)

E_f,V = Ka · V² (7)

where E_f,V is the average energy per cycle.
From this we can see that lowering the voltage yields a drastic decrease in energy consumption. The frequency f is almost linearly related to the voltage:

f = Kb · (V − VT)² / V (8)

where VT is the threshold voltage and Kb is a constant. For a sufficiently small threshold voltage, the frequency is approximately Kb · V, i.e., nearly linear in the voltage.
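Equations (4)-(8) translate directly into code; the sketch below uses hypothetical placeholder values for the constants Ka, Kb, and VT.

```python
def dynamic_power(ka, f, v):
    """Eq. (4): dynamic power Pow = Ka * f * V^2."""
    return ka * f * v ** 2

def task_energy(cycles, ka, v):
    """Eqs. (5)-(7): E = C * Pow / f = C * Ka * V^2.
    Energy per task falls quadratically with the supply voltage."""
    return cycles * ka * v ** 2

def frequency(kb, v, vt):
    """Eq. (8): f = Kb * (V - VT)^2 / V; for VT -> 0, f ~ Kb * V."""
    return kb * (v - vt) ** 2 / v
```

Note that the task energy is independent of f: slowing the clock alone saves no energy per task, which is why DVFS lowers the voltage together with the frequency.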
However, our algorithm is able to guarantee a minimal performance loss in this scenario. The proof of optimality is in line with that of step two: the minimal time-period increment guarantees that when the power budget constraint is satisfied, the performance loss is minimal, as shown in Figure 8.
Figure 8. The power-aware parallel-pipeline scheduling algorithm.
In our system, since energy increases when performance degrades (i.e., with a longer execution time), we assign a higher weight to performance by setting αc = 0.15 αm. Specifically, the loss factors for GPU cores and memory are calculated as:
l_Ci^t = αc · Cie^t + (1 − αc) · l_Cip^t (9)
We then combine the core and memory loss functions with a factor φ that balances core impact and memory impact in influencing system performance and energy:
TotalLoss_ij^t = φ · l_Ci^t + (1 − φ) · l_mj^t (10)
Equation (10) shows how the total loss function is obtained. For different CPU-GPU systems, tuning the value of φ lets the system balance core and memory influence. In our hardware testbed, φ = 0.33 is the value that reflects the system characteristics derived from experiments. Based on the total loss, the weights used in the frequency scaling algorithm are updated as follows:
weight_ij^(t+1) = weight_ij^t · (1 − (1 − β) · TotalLoss_ij^t) (11)
A larger β makes the algorithm more robust to system noise, while a smaller β gives more weight to the loss factor of the current time interval, making the algorithm respond to workload changes in a short time. In our experiments, we select β = 0.2 to filter out system noise while retaining a quick response to workload changes. Among the entire set of N × M weights (assuming N core frequency levels and M memory frequency levels), the highest one is selected, and its corresponding core and memory frequencies are enforced in the next period.
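Equations (9)-(11) and the final selection step can be sketched as follows. The loss terms are assumed to be pre-computed per frequency level, and the α, φ, β defaults mirror the values quoted above; the function and variable names are ours.

```python
def core_loss_factor(energy_term, perf_term, alpha_c=0.15):
    """Eq. (9) sketch: weighted combination of a core's energy and
    performance loss terms (alpha_c weights energy vs. performance)."""
    return alpha_c * energy_term + (1 - alpha_c) * perf_term

def update_weights(weights, core_loss, mem_loss, phi=0.33, beta=0.2):
    """Eqs. (10)-(11) sketch: combine per-core-frequency and
    per-memory-frequency loss factors with balance factor phi, decay
    each (core-freq, mem-freq) weight, and return the pair with the
    highest new weight, to be enforced in the next period."""
    new, best = {}, None
    for (i, j), w in weights.items():
        total = phi * core_loss[i] + (1 - phi) * mem_loss[j]  # Eq. (10)
        new[(i, j)] = w * (1 - (1 - beta) * total)            # Eq. (11)
        if best is None or new[(i, j)] > new[best]:
            best = (i, j)
    return new, best
```

Frequency pairs with a consistently high total loss see their weights decay fastest, so the search concentrates on the pairs that keep the combined performance/energy loss low.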
6. EXPERIMENTAL RESULTS
Power- and thermal-aware scheduling results were obtained for different benchmarks taken from the embedded system synthesis benchmark suite or generated using MATLAB. The task scheduling process is shown in Figure 9. The ILP solutions were generated using OpenMP, and the simulations were performed on a dual-core system with a 3 GHz processor. The execution output and the results of the fuzzy logic techniques are shown in Figures 10, 11, and 12. The execution time and the relative deadline are adjusted to obtain the desired priority of the tasks. The modified priority order based on the fuzzy logic output is shown in Figure 13, and the corresponding waveforms are depicted in Figure 14. As the power information of the processing elements in these benchmarks is not available, approximate values based on the internal structure of each core are used; the power consumption of each core is adjusted according to an estimated number of gates in the module. The hardware prototype model for real-time task scheduling in a heterogeneous multicore processor for microgrid power management is shown in Figure 15.
Figure 9.Task scheduling process
Figure 10. Executing output according to Rules Figure 11. Input Waveform
[Figure 9 block diagram: task token generators (interrupt-priority Task ID 1 and application-priority Task IDs 2-4), a task token manager, a task dispatcher with task/ISR token routing, per-task function blocks, a data store memory holding the shared value 'A', and an application task combiner feeding a monitor for plots.]
A Novel Approach in Scheduling Of the Real-Time Tasks In Heterogeneous Multicore… (Lavanya Dhanesh)
Figure 12. Output waveforms Figure 13. Adjusting execution time and deadline for
getting priority
Figure 14. Output waveform Figure 15. Prototype hardware model
7. CONCLUSION
Current research on GPU-CPU systems focuses mainly on performance, while the energy
efficiency of such systems receives much less attention. The few existing studies that attempt to lower the
energy consumption of GPU-CPU architectures address either the GPU or the CPU in isolation and thus
cannot achieve maximal energy savings. In this paper, we have presented GreenGPU, a holistic energy
management framework for GPU-CPU heterogeneous architectures. Our solution features a two-tier design.
In the first tier, GreenGPU dynamically splits and distributes workloads to the GPU and CPU based on the
workload characteristics, such that both sides finish at approximately the same time. As a result, the energy
wasted on staying idle while waiting for the slower side to finish is minimized. In the second tier, GreenGPU
dynamically throttles the frequencies of GPU cores and memory in a coordinated manner, based on their
utilizations, for maximal energy savings with only marginal performance degradation. Likewise, the
frequency and voltage of the CPU are scaled similarly. We implemented GreenGPU using the CUDA
framework on a real physical testbed with Nvidia GeForce GPUs and AMD Phenom II CPUs.
Experimental results with standard Rodinia benchmarks show that GreenGPU achieves 21.04% average
energy savings and outperforms several well-designed baselines.
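The first-tier idea of splitting a workload so that both sides finish at approximately the same time can be sketched as follows. This is a hedged illustration of the principle only: the throughput numbers are hypothetical, and the actual framework measures rates at run time from workload characteristics.

```python
# Sketch of proportional workload splitting: give each side a share of the work
# proportional to its measured throughput, so both finish at about the same
# time and idle-waiting energy is minimized.


def split_workload(total_items, gpu_rate, cpu_rate):
    """Return (gpu_share, cpu_share) such that gpu_share/gpu_rate
    is approximately equal to cpu_share/cpu_rate."""
    gpu_share = round(total_items * gpu_rate / (gpu_rate + cpu_rate))
    return gpu_share, total_items - gpu_share


# Hypothetical rates: GPU processes 300 items/s, CPU 100 items/s.
gpu_items, cpu_items = split_workload(1000, gpu_rate=300.0, cpu_rate=100.0)
# Both sides now take about 2.5 s: 750/300 == 250/100
```

With any other split, one side finishes early and burns idle power while waiting for the other, which is exactly the waste the first tier is designed to avoid.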
REFERENCES
[1] Cavium OCTEON processor family. http://www.caviumnetworks.com/OCTEONMIPS64.html.
[2] IBM BladeCenter System. http://www-03.ibm.com/systems/bladecenter.
[3] IBM POWER7 Systems. http://www-03.ibm.com/systems/power.
[4] Intel IXP2XXX product line of network processors. http://intel.com/design/network/products/npfamily/index.html.
[5] Intel Xeon machine. http://www.Intel.com/Xeon.
[6] Chen, D. Zhang and Z. Wang, “Research of the heterogeneous multi-core processor architecture design [J]”,
Computer Engineering and Science, vol. 33, no. 12, (2011), pp. 27-36.
[7] S. K. Baruah, “Partitioning real-time tasks among heterogeneous multiprocessors”, In: Proc. of the 2004
International Conference on Parallel Processing, ICPP, Toronto, Canada, (2004), pp. 467-474.
[8] R. Li, Y. Liu and X. Cheng, “A survey of task scheduling research progress on multiprocessors”, Journal of
Computer Research and Development, vol. 45, no. 9, (2008), pp. 1620-1629.
[9] J. Li and S. Jin, “Research on static task scheduling strategy based on heterogeneous multi-core processors [J]”,
Computer Engineering and Design, vol. 34, no. 1, (2013), pp. 178-184.
[10] J. Jiang, “Research on embedded software key issues of heterogeneous multi-core processors [D]”, Chong Qing
University, (2011).
[11] M. Shanmugasundaram, R. Kumar and Harish M. Kittur, “Performance analysis of preemptive based uniprocessor
scheduling”, International Journal of Electrical and Computer Engineering (IJECE), vol. 6, no. 4, (2016),
pp. 1489-1498.
[12] Medhat H. Awadalla, “Processor speed control for power reduction of real-time systems”, International Journal
of Electrical and Computer Engineering (IJECE), vol. 5, no. 4, (2015), pp. 701-713.
[13] S. Albers, “Energy-efficient algorithms”, Commun. ACM, vol. 53, no. 5, (2010), pp. 86-96. [Online].
Available: http://doi.acm.org/10.1145/1735223.1735245.
[14] P. Cichowski, J. Keller and C. Kessler, “Energy-efficient mapping of task collections onto manycore processors”,
in Proc. 5th Swedish Workshop on Multicore Computing (MCC 2012), (2012).
BIOGRAPHIES OF AUTHORS
Mrs. Lavanya Dhanesh received her B.E. degree in Electrical and Electronics Engineering from
Bharathiyar University in 2002. She completed her M.E. in Embedded System Design at
Anna University, Chennai, in 2009. She is a research scholar at Sathyabama University and is
currently working at Panimalar Institute of Technology, Chennai.
Dr. P. Murugesan specializes in power systems. He is currently working as a
Professor of EEE at S.A. Engineering College, Chennai. He guides research and is
much interested in the fields of low-power computing, fault tolerance and real-time
embedded systems.