The document describes ALEA, a probabilistic tool for fine-grained energy profiling with low overhead. ALEA uses basic-block sampling to estimate execution time and energy consumption at the basic-block level. It models power consumption as a normal distribution and uses maximum likelihood estimation to derive confidence intervals for its time and energy estimates. Validated against direct instrumentation on benchmarks, ALEA achieves average errors of 1.4-3.7% for time and energy, and it has been used to optimize applications by identifying hot blocks and guiding optimization strategies.
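The core estimator can be sketched as follows. This is a minimal illustration of the statistical idea (Gaussian power samples, a known block execution time, a normal-approximation confidence interval); the function name and parameters are hypothetical, not ALEA's actual API:

```python
import math
import random

def estimate_block_energy(power_samples, block_time_s, z=1.96):
    """Estimate a basic block's energy from sampled power readings.

    Power is modelled as normally distributed; the confidence interval
    follows from the standard error of the sample mean.
    """
    n = len(power_samples)
    mean_p = sum(power_samples) / n
    var_p = sum((p - mean_p) ** 2 for p in power_samples) / (n - 1)
    se = math.sqrt(var_p / n)               # standard error of mean power
    energy = mean_p * block_time_s          # E = P_mean * t
    half_width = z * se * block_time_s      # ~95% CI half-width on energy
    return energy, (energy - half_width, energy + half_width)

random.seed(0)
samples = [random.gauss(40.0, 2.0) for _ in range(200)]  # watts
e, (lo, hi) = estimate_block_energy(samples, block_time_s=0.5)
```

With more samples the interval tightens as 1/sqrt(n), which is why sampling can stay cheap while still giving bounded error.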
An updated tutorial on using Wannier90 with the VASP code for electronic-structure calculations. Includes tips on how to build VASP with Wannier90 support, how to use the VASP-to-Wannier90 interface, and a worked example of calculating the electronic band structure and density of states of SnS2 using the PBE and HSE06 functionals and the GW routines.
Some "accumulated wisdom" from several years of using the Vienna ab initio Simulation Package (VASP) code for computational modelling. Includes tips on convergence and parallelisation.
This paper proposes a new approach to determine a linear mathematical model of a PV module based on an accurate nonlinear model. In this study, the electrical parameters at only one operating condition are calculated from the accurate model. First-order Taylor series approximations are then applied to the nonlinear model to derive the proposed linear model at any operating condition. The proposed method fixes the number of iterations required, which decreases calculation time and increases the speed of numerical convergence. It is also observed that, with this method, the system always converges, eliminating the failures caused by inappropriate initial values. The proposed model is intended to allow photovoltaic plant simulations on low-cost computer platforms. Its effectiveness is demonstrated for different temperature and irradiance values by comparing the results of the proposed model with experimental results obtained from the module datasheet information.
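As a sketch of the idea (not the authors' exact formulation), a simplified single-diode PV I-V curve, with series and shunt resistances omitted, can be linearized around one operating point by a first-order Taylor expansion; all parameter values below are illustrative:

```python
import math

def pv_current(v, i_ph, i_sat, n_vt):
    """Nonlinear single-diode PV model (series/shunt resistances omitted):
    I(V) = I_ph - I_sat * (exp(V / (n*Vt)) - 1)."""
    return i_ph - i_sat * math.expm1(v / n_vt)

def linearize(v0, i_ph, i_sat, n_vt):
    """First-order Taylor approximation of the I-V curve around v0:
    I(V) ~= I(v0) + dI/dV|_{v0} * (V - v0)."""
    i0 = pv_current(v0, i_ph, i_sat, n_vt)
    slope = -i_sat / n_vt * math.exp(v0 / n_vt)   # dI/dV at v0
    return lambda v: i0 + slope * (v - v0)

# Linear model built from a single operating point at V = 0.5 V
lin = linearize(0.5, i_ph=5.0, i_sat=1e-9, n_vt=0.05)
```

Evaluating the linear model needs no iteration at all, which is the source of the speed-up the paper describes; accuracy degrades only far from the expansion point.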
Security Constrained UCP with Operational and Power Flow Constraints (IDES Editor)
An algorithm to solve the security constrained unit commitment problem (UCP) with both operational and power flow constraints (PFC) has been proposed to plan a secure and economical hourly generation schedule. The proposed algorithm introduces an efficient unit commitment (UC) approach with PFC that obtains the minimum system operating cost while satisfying both unit and network constraints when contingencies are included. In the proposed model, repeated optimal power flow for the satisfactory unit combinations, for every line removal over the given study period, is carried out to obtain UC solutions with both unit and network constraints. The system load demand patterns are obtained for the test-case systems by taking into account the hourly load variations at the load buses and adding Gaussian random noise. The proposed algorithm has been applied to obtain UC solutions for the IEEE 30-bus and 118-bus systems and a practical Indian utility system scheduled over 24 hours. The algorithm and simulation are carried out in MATLAB, and the results obtained are quite encouraging.
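The load-pattern construction described above can be sketched as follows; the hourly shape factors and the noise level are illustrative assumptions, not values from the paper:

```python
import random

def hourly_load_profile(base_load_mw, hourly_factors, sigma=0.02, seed=1):
    """Generate a 24-hour bus load pattern by scaling a base load with
    hourly factors and adding Gaussian random noise."""
    rng = random.Random(seed)
    return [base_load_mw * f * (1.0 + rng.gauss(0.0, sigma))
            for f in hourly_factors]

# Assumed daily shape: night valley, morning ramp, midday peak, evening fall
factors = [0.7] * 6 + [0.9] * 6 + [1.0] * 6 + [0.8] * 6
profile = hourly_load_profile(100.0, factors)
```

Each bus gets its own noisy realisation of the shape, so the resulting demand patterns differ hour by hour and bus by bus, as the study requires.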
AN EFFICIENT ALGORITHM FOR WRAPPER AND TAM CO-OPTIMIZATION TO REDUCE TEST APP... (IAEME Publication)
System-on-Chip (SoC) designs composed of many embedded cores are ubiquitous in today's integrated circuits. Each of these cores must be tested separately after the SoC is manufactured. Modular testing is therefore adopted for core-based SoCs, as it promotes test reuse and permits the cores to be tested without comprehensive knowledge of their internal structural details. Such modular testing requires a special test access mechanism (TAM), with wrappers connecting the core I/Os to the TAM, and aims to minimize overall test time. In this paper, several issues in co-optimizing the wrapper and TAM are analyzed, including the optimal partitioning of the TAM width and the assignment of cores to the TAM partitions.
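A minimal sketch of the core-assignment step: a greedy longest-test-first heuristic, which is one simple way to balance test time across TAM partitions (the core names and test times below are hypothetical, and the paper's own algorithm may differ):

```python
def assign_cores(core_test_times, partitions):
    """Greedily assign cores to TAM partitions (longest test first) so the
    maximum partition test time - which bounds the SoC test time - stays low."""
    loads = [0.0] * partitions            # accumulated test time per partition
    assignment = {}
    for core, t in sorted(core_test_times.items(), key=lambda kv: -kv[1]):
        p = min(range(partitions), key=loads.__getitem__)  # least-loaded
        loads[p] += t
        assignment[core] = p
    return assignment, max(loads)

cores = {"cpu": 9.0, "dsp": 7.0, "sram": 4.0, "uart": 3.0, "gpio": 2.0}
assignment, test_time = assign_cores(cores, partitions=2)
```

Because partitions are tested in parallel, the SoC test time is the longest partition's total, so balancing the loads is what reduces test application time.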
A new T-circuit model of wind turbine generator for power system steady state... (journalBEEI)
Modeling of wind power plants (WPP) is a crucial issue in power system studies. In this paper, a new model of a WPP for steady-state (i.e. load flow) studies is proposed. Like previous T-circuit based models, it is developed from the equivalent T-circuit of the WPP induction generator. Unlike the previous models, however, the mathematical formulation of the new model is shorter and less complicated, and its derivation is much simpler, requiring only minimal mathematical operations. Furthermore, the rotor voltage of the WPP induction generator is readily available as an output of the new model, and this rotor voltage can be used as a basis to calculate the induction generator slip. The validity of the new method is tested on a representative 9-bus electrical power system with an installed WPP, and comparative studies between the proposed method (new model) and an existing method (previous model) are presented.
Improving efficiency of Photovoltaic System with Neural Network Based MPPT Co... (IJMER)
Photovoltaic (PV) technology is one of the important renewable energy resources, as it is pollution-free and clean. PV systems have a high cost of energy and low efficiency; consequently, they have not become fully attractive as an alternative for electricity users. It is essential that PV systems operate to extract the maximum possible power at all times. The Maximum Power Point (MPP) changes with atmospheric conditions (radiation and temperature), so it is difficult to sustain the MPP under all conditions. Many Maximum Power Point Tracking (MPPT) methods have been developed and implemented, varying in the number of sensors used, complexity, accuracy, speed, ease of hardware implementation, cost, and tracking efficiency. The MPPT techniques presented in the literature indicate that variable-step-size Perturb & Observe (VP&O), variable-step-size Incremental Conductance (VINC), and Perturb & Observe (P&O) using a Fuzzy Logic Controller (FLC) can achieve reliable global MPPT with low cost and complexity and can be easily adapted to different PV systems. In this paper, we establish theoretical and experimental verification of the MPPT controllers most cited in the literature (the VP&O, VINC, and P&O-with-FLC algorithms). The three MPPT controllers were tested in MATLAB/Simulink to analyze each technique under different atmospheric conditions. The experimental results show that VINC and P&O with FLC perform better than VP&O in terms of response time.
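A minimal variable-step P&O sketch, tracking a toy concave power-voltage curve. The gain k and the curve itself are illustrative assumptions; the paper's controllers are Simulink implementations, not this code:

```python
def perturb_observe_step(v, p, v_prev, p_prev, k=0.05):
    """One variable-step P&O iteration: perturb the operating voltage in the
    direction that increased power, with step size scaled by |dP/dV|."""
    dp, dv = p - p_prev, v - v_prev
    step = k * abs(dp / dv) if dv != 0 else k
    return v + step if dp * dv > 0 else v - step

def power(v):
    return 100.0 - (v - 17.0) ** 2   # toy PV power curve, MPP at 17 V

v_prev, v = 10.0, 10.5
for _ in range(200):
    v_prev, v = v, perturb_observe_step(v, power(v), v_prev, power(v_prev))
```

Near the MPP the slope dP/dV approaches zero, so the step shrinks automatically; that adaptive step is what distinguishes VP&O from fixed-step P&O, which oscillates around the peak.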
Energy Consumption Saving in Embedded Microprocessors Using Hardware Accelera... (TELKOMNIKA JOURNAL)
This paper deals with the reduction of power consumption in embedded microprocessors. Computing power and energy efficiency are becoming the main challenges for embedded system applications; this is, in particular, the case for wearable systems. When the power supply is provided by batteries, a long service life is an important requirement for these systems. This work investigates a method for reducing microprocessor energy consumption based on the use of hardware accelerators. Their use makes it possible to reduce the execution time and to decrease the clock frequency, thereby reducing the power consumption. To provide experimental results, the authors analyze a case study in the field of wearable devices: the processing of ECG signals. The experimental results show that the use of hardware accelerators significantly reduces the power consumption.
The principal means to increase performance of modern high performance computing (HPC) applications is to use more processors in parallel. However, energy use increases linearly with the number of processing cores. Energy costs for top supercomputers already exceed 10M EUR per year, and the operating costs and carbon footprint of next generation systems (exaflop scale) are a major concern.
HPC applications use a parallel programming paradigm like the Message Passing Interface (MPI) to coordinate computation and communication among thousands of processors. With dynamically-changing factors both in hardware and software affecting energy usage of processors, there exists an opportunity for power monitoring and regulation at runtime to achieve savings in energy.
In this talk, an adaptive runtime framework is described that enables processors with core-specific power control to reduce power with little or no performance impact. Two opportunities to improve the energy efficiency of processors running MPI applications are identified: computational workload imbalance and memory system saturation.
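The first opportunity can be sketched as follows: MPI ranks that finish their computation early (off the critical path) can run at lower frequency without delaying the iteration. This is a simplified illustration of the idea, not the framework's actual policy; the frequency bounds in GHz are assumed:

```python
def frequency_scaling(rank_compute_times, f_max=3.0, f_min=1.2):
    """Scale each rank's core frequency so faster ranks slow down just
    enough to meet the critical (slowest) rank, saving power without
    extending the iteration time."""
    t_crit = max(rank_compute_times)
    freqs = []
    for t in rank_compute_times:
        # a rank taking time t at f_max can finish in t_crit at f_max*t/t_crit
        f = max(f_min, min(f_max, f_max * t / t_crit))
        freqs.append(round(f, 3))
    return freqs
```

The slowest rank keeps the maximum frequency, so the parallel iteration time is unchanged while the waiting cores draw less power.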
The optimal solution for unit commitment problem using binary hybrid grey wol... (IJECEIAES)
The aim of this work is to solve the unit commitment (UC) problem in power systems by calculating the minimum production cost for power generation and finding the best distribution of generation among the units (unit scheduling) using a binary grey wolf optimizer based on particle swarm optimization (BGWOPSO). The minimum production cost is calculated using quadratic programming and represents the global solution that the BGWOPSO algorithm must reach, yielding the unit statuses (on or off). The suggested method was applied to the IEEE 39-bus test system; the simulation results show its effectiveness over other algorithms in minimizing production cost and producing excellent unit schedules.
Multi Objective Directed Bee Colony Optimization for Economic Load Dispatch W... (IJECEIAES)
Earlier economic emission dispatch methods for optimizing emission levels (carbon monoxide, nitrous oxide, and sulphur dioxide) in thermal generation used soft-computing techniques such as fuzzy logic, neural networks, evolutionary programming, differential evolution, and particle swarm optimization. These methods incurred comparatively high transmission losses. Considering the nonlinear load behavior of unbalanced systems with the differential load patterns prevalent in tropical countries such as India, Pakistan, and Bangladesh, the erratic variation of enhanced power demand is of immense importance. This paper therefore applies multi-objective directed bee colony optimization with enhanced power demand to optimize transmission losses to a desired level. Using this technique, the emission level versus cost of generation is displayed in figure-3 and figure-4, and the result is compared with other dispatch methods using valve point loading (VPL) and with multi-objective directed bee colony optimization with and without transmission loss.
Performance prediction of PV & PV/T systems using Artificial Neural Networks... (Ali Al-Waeli)
This presentation offers insight into the use of ANN and machine learning for various applications in solar energy. Prepared and presented by Dr. Ali H. A. Alwaeli.
Optimal and Power Aware BIST for Delay Testing of System-On-Chip (IDES Editor)
Test engineering for fault-tolerant VLSI systems is encumbered with optimization requirements for hardware overhead, test power, and test time. The high quality of these complex high-speed VLSI circuits can be assured only through delay testing, which involves checking for accurate temporal behavior. In the present paper, a data-path based built-in test pattern generator (TPG) is implemented that generates iterative pseudo-exhaustive two-patterns (IPET) for parallel delay testing of modules with different input cone capacities. Further, a CMOS implementation of a low power architecture (LPA) for scan-based built-in self test (BIST) for delay and combinational testing is carried out, which reduces test power dissipation in the circuit under test (CUT). Experimental results and comparisons with pre-existing methods demonstrate the reduction in hardware overhead and test time.
Design of High Speed Low Power 15-4 Compressor Using Complementary Energy Pat... (CSCJournals)
This paper presents the implementation of a novel high-speed, low-power 15-4 compressor for high-speed multiplication applications using a single-phase-clocked quasi-static adiabatic logic, namely CEPAL (Complementary Energy Path Adiabatic Logic). The main advantage of this static adiabatic logic is that it avoids the (1/2)CVth^2 energy dissipation occurring every cycle in multi-phase power-clocked adiabatic circuits. The proposed compressor uses a bit-sliced architecture to exploit parallelism in computing the sum of 15 input bits with five full adders. It is also built around a novel 5-3 compressor that minimizes the stage delays of a conventional 5-3 compressor designed from single-bit full adder and half adder architectures. First, the performance characteristics of the CEPAL 15-4 compressor with 14-transistor and 10-transistor adder designs are compared against the conventional static CMOS counterpart to identify its adiabatic power advantage. The analyses are carried out in the industry-standard Tanner EDA design environment using 250 nm technology libraries. The results show that the CEPAL 14T 15-4 compressor is 68.11% more power efficient and 75.31% faster than its static CMOS counterpart.
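Behaviourally, a 15-4 compressor simply counts its fifteen input bits into a 4-bit binary value. A functional model built from full adders and three 5-3 compressors followed by a final addition can be sketched as follows (this mirrors the counting structure only, not the transistor-level CEPAL design):

```python
def full_adder(a, b, c):
    """Single-bit full adder: returns (sum, carry)."""
    s = a ^ b ^ c
    carry = (a & b) | (b & c) | (a & c)
    return s, carry

def compressor_5_3(b):
    """Count five input bits into a 3-bit value using two full adders."""
    s1, c1 = full_adder(b[0], b[1], b[2])
    s2, c2 = full_adder(s1, b[3], b[4])
    return s2 + 2 * (c1 + c2)   # weight-1 sum bit plus two weight-2 carries

def compressor_15_4(bits):
    """15-4 compressor: three 5-3 compressors, then a final addition."""
    assert len(bits) == 15
    return sum(compressor_5_3(bits[i:i + 5]) for i in range(0, 15, 5))
```

The three 5-3 slices operate in parallel (the bit-sliced parallelism the abstract mentions), and only the final addition is a carry-propagate stage.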
With the rise of containerization, as well as the established adoption of virtualization technologies, run-time power and energy management is becoming one of the key challenges in modern cloud computing. This is also fundamental because power consumption accounts for 20% of the Total Cost of Ownership of a datacenter, and energy costs will exceed hardware costs in the near future. In this context, several power-optimization goals can be pursued. On the one hand, a power cap can be enforced, and on top of that the system should maximize performance. On the other hand, when performance is critical, the system should provide a minimum SLA and optimize power consumption without violating it. Within this context, we propose a common autonomic methodology based on the ODA (Observe-Decide-Act) control loop for containers and virtual machines. The proposed methodology achieves 25% power savings for containers and can improve performance under a power cap for virtual machines.
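A toy version of such an Observe-Decide-Act loop for a container under a power cap can be sketched as follows, assuming power roughly proportional to the CPU quota; the gain and the power model are illustrative, not the paper's controller:

```python
def oda_step(power_w, cap_w, quota, k=0.05, qmin=0.1, qmax=1.0):
    """One Observe-Decide-Act iteration: proportionally shrink or grow a
    container's CPU quota to keep measured power near the cap."""
    error = (cap_w - power_w) / cap_w      # observe measurement, decide
    quota = quota * (1.0 + k * error)      # act: proportional adjustment
    return min(qmax, max(qmin, quota))

# Assumed plant: power scales linearly with quota (200 W at full quota)
quota = 1.0
for _ in range(500):
    quota = oda_step(200.0 * quota, 120.0, quota)
```

The loop settles where measured power equals the cap (here quota = 120/200 = 0.6); a real controller would observe actual power telemetry rather than a linear model.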
SIMULTANEOUS OPTIMIZATION OF STANDBY AND ACTIVE ENERGY FOR SUB-THRESHOLD CIRC... (VLSICS Design)
Increased downscaling of CMOS circuits, in feature size and threshold voltage, has dramatically increased leakage current, so leakage power reduction is an important design issue for both active and standby modes as technology scaling continues. In this paper, a simultaneous active and standby energy optimization methodology is proposed for 22 nm sub-threshold CMOS circuits. In the first phase, we investigate dual threshold voltage design for minimizing active energy per cycle. A slack-based genetic algorithm is proposed to find the optimal reverse body bias assignment for the set of non-critical path gates, ensuring low active energy per cycle at the maximum allowable frequency and the optimal supply voltage. The second phase determines the optimal reverse body bias that can be applied to all gates for standby power optimization at the supply voltage determined in the first phase. There are therefore two sets of gates and two reverse body bias values, one for each set, and the reverse body bias is switched between these two values according to the mode of operation. Experimental results are obtained for ISCAS-85 benchmark circuits such as 74L85, 74283, and ALU74181, and for a 16-bit RCA. The optimized circuits show significant energy savings (from 14.5% to 42.28%) and standby power savings (from 62.8% to 67%).
Condition Monitoring of a Large-scale PV Power Plant in Australia (Amit Dhoke)
This presentation considers the problems of condition monitoring and fault detection in an existing solar photovoltaic (PV) plant in Australia. A PV prediction model is proposed to accurately model the PV plant output. This model is then used with three condition monitoring and fault detection methods. The considered methods involve comparing measured and modelled voltage and current ratios against appropriate thresholds and against adjacent string values. The procedure to calculate the thresholds is described. The proposed PV model and the condition monitoring approaches are applied together to real PV data. As a result, string disconnection and bypassed module faults are detected. It is also found that string-level monitoring is ideally suited for reliable condition monitoring and fault detection, especially in large PV plants.
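The ratio-comparison idea described in the abstract can be sketched in a few lines. The function name, the single fixed threshold, and the AND decision rule below are illustrative assumptions, not the paper's calibrated procedure:

```python
def check_string(measured_i, modeled_i, neighbors_i, ratio_threshold=0.85):
    """Flag a PV string whose measured current is low relative both to the
    prediction model AND to adjacent strings.

    measured_i:  measured string current (A)
    modeled_i:   current predicted by the PV model (A)
    neighbors_i: measured currents of adjacent strings (A)

    Returns True when both ratios fall below the threshold, suggesting a
    string-level fault (e.g. disconnection or a bypassed module).
    Threshold value and decision rule are assumptions for illustration.
    """
    model_ratio = measured_i / modeled_i
    neighbor_avg = sum(neighbors_i) / len(neighbors_i)
    neighbor_ratio = measured_i / neighbor_avg
    return model_ratio < ratio_threshold and neighbor_ratio < ratio_threshold
```

Using both comparisons guards against flagging a uniformly shaded array (where all strings drop together) as a fault.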
Optimal Power Flow with Reactive Power Compensation for Cost And Loss Minimiz... (ijeei-iaes)
One of the concerns of power system planners is the problem of optimizing the cost of generation as well as minimizing losses on the grid system. This issue can be addressed in a number of ways; one of them is the use of reactive power support (shunt capacitor compensation). This paper used shunt capacitor placement for cost and transmission loss minimization on the Nigerian power grid system, a 24-bus, 330 kV network interconnecting four thermal generating stations (Sapele, Delta, Afam and Egbin) and three hydro stations to various load points. Simulation in MATLAB was performed on the Nigerian 330 kV transmission grid system. The technique employed was based on optimal power flow formulations using the Newton-Raphson iterative method for the load flow analysis of the grid system. The results show that when shunt capacitors are included as inequality constraints on the power system, the total cost of generation is reduced, the total system losses decrease, and the system voltage profile improves significantly.
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility (SciAstra)
The Indian Statistical Institute (ISI) has extended its application deadline for 2024 admissions to April 2. Known for its excellence in statistics and related fields, ISI offers a range of programs from Bachelor's to Junior Research Fellowships. The admission test is scheduled for May 12, 2024. Eligibility varies by program, generally requiring a background in Mathematics and English for undergraduate courses and specific degrees for postgraduate and research positions. Application fees are ₹1500 for male general category applicants and ₹1000 for females. Applications are open to Indian and OCI candidates.
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
ESR spectroscopy in liquid food and beverages.pptx (PRIYANKA PATEL)
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for treating food to preserve it, and irradiation treatment is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not cause any harm to human health, quality assessment of the food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin trapping technique is useful for detecting highly unstable radicals in food. The antioxidant capability of liquid foods and beverages is mainly assessed by the spin trapping technique.
Toxic effects of heavy metals: Lead and Arsenic (sanjana502982)
Heavy metals are naturally occurring metallic chemical elements that have relatively high density and are toxic even at low concentrations. All toxic metals are termed heavy metals irrespective of their atomic mass and density, e.g. arsenic, lead, mercury, cadmium, thallium, chromium, etc.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... (Sérgio Sacani)
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R_1/2 ∼ 50-200 pc, stellar masses of M⋆ ∼ 10^7-10^8 M⊙, and star-formation rates of SFR ∼ 0.1-1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... (University of Maribor)
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx (MAGOTI ERNEST)
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and '70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation, makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for the cultivation of fish, crustacean, and shellfish larvae. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr... (Travis Hills MN)
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... (Wasswaderrick3)
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our energy conservation techniques to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
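As an illustration of the terminal-velocity result mentioned in the abstract, Stokes' law for a small sphere settling in a viscous fluid can be evaluated directly. The numeric inputs in the usage below are arbitrary example values, not taken from the book:

```python
def stokes_terminal_velocity(radius, rho_particle, rho_fluid, mu, g=9.81):
    """Terminal velocity of a small sphere in a viscous fluid under Stokes drag:

        v_t = 2 r^2 (rho_p - rho_f) g / (9 mu)

    radius in m, densities in kg/m^3, dynamic viscosity mu in Pa*s.
    Valid only at low Reynolds number (creeping flow).
    """
    return 2.0 * radius ** 2 * (rho_particle - rho_fluid) * g / (9.0 * mu)
```

For example, a 1 mm radius steel ball (7800 kg/m³) in glycerine (1260 kg/m³, µ = 1.5 Pa·s) settles at roughly 9.5 mm/s, which keeps the Reynolds number well below 1, so the Stokes regime applies.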
What is greenhouse gasses and how many gasses are there to affect the Earth. (moosaasad1975)
What are greenhouse gases, how do they affect the Earth and its environment, what is the future of the environment and the Earth, and how do they influence weather and climate?
ALEA: Fine-grain Energy Profiling with Basic Block Sampling
1. 1
ALEA: Fine-grain Energy Profiling with Basic Block Sampling
Lev Mukhanov, Dimitrios S. Nikolopoulos and Bronis R. de Supinski
Queen’s University of Belfast
PACT 2015
2. 2
Executive summary
Fine-grain energy profiling is essential for energy optimization
Contribution:
A probabilistic approach and a tool (ALEA) for fine-grain energy profiling
7. 7
Fine-grain energy profiling challenges
Coarse-grained power/energy meters
Any measurements bias real energy
Overhead introduced by measurements is critical
9. 9
State of the art approaches
Manual instrumentation [PowerPack, R. Ge 2010]
Low overhead
Coarse-grain
What code should be instrumented?
Source code must be modified
Binary instrumentation
Fine-grain
High overhead (PIN: more than 300%)
HPM (Hardware Performance Monitors) [R. Bertran 2013]
EPI (Energy Per Instruction) models [Y. S. Shao 2013]
Low overhead
Do not capture the dynamic execution context
Low accuracy
Sampling [PowerScope, J. Flinn 1999]
Low overhead
Is it fine-grain?
10. 10
Performance profiling based on Sampling
Performance profiling model: the period between samples is associated with the sampled object =⇒ coarse-grain probabilistic model
12. 12
Probabilistic model
Execution time of a block:
\[ time_{block} = p_{block} \cdot time_{application} \quad (1) \]
Estimation of \( \hat{p}_{block} \) using sampling
Estimation of execution time:
\[ \widehat{time}_{block} = \hat{p}_{block} \cdot time_{application} \quad (2) \]
Power measurements and a sample are taken simultaneously to estimate \( \widehat{pow}_{block} \)
Estimation of energy consumption:
\[ \widehat{energy}_{block} = \widehat{pow}_{block} \cdot \widehat{time}_{block} \quad (3) \]
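A minimal sketch of estimators (2) and (3) on a synthetic tick trace. The trace, the per-block power readings, and the function name are illustrative assumptions for this sketch, not ALEA's implementation:

```python
import random

def estimate_block_time_energy(trace, power, t_app, n_samples, seed=0):
    """Estimate per-block execution time and energy from random samples.

    trace:     list of block ids, one per tick (synthetic ground truth).
    power:     dict mapping block id -> power reading (W) observed when a
               sample lands in that block.
    t_app:     total application execution time (s).
    n_samples: number of random samples to draw.

    Returns {block: (time_hat, energy_hat)}.
    """
    rng = random.Random(seed)
    counts, pow_sum = {}, {}
    for _ in range(n_samples):
        blk = trace[rng.randrange(len(trace))]      # sample a random tick
        counts[blk] = counts.get(blk, 0) + 1
        pow_sum[blk] = pow_sum.get(blk, 0.0) + power[blk]
    est = {}
    for blk, n_blk in counts.items():
        p_hat = n_blk / n_samples        # MLE of the block's time fraction
        t_hat = p_hat * t_app            # eq. (2): time estimate
        pow_hat = pow_sum[blk] / n_blk   # mean power over the block's samples
        est[blk] = (t_hat, pow_hat * t_hat)  # eq. (3): energy estimate
    return est
```

With enough samples the estimated time fraction converges to the block's true share of the trace, which is the core of the probabilistic model.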
13. 13
Random sampling ≈ Systematic sampling
[Figure: application timelines comparing random sampling (samples at random ticks, e.g. 2, 1023, 1024, 5990) with systematic sampling (samples every 1000 ticks starting at a random offset); each sample records the power of the block executing at that tick.]
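The slide's claim that random sampling is well approximated by systematic sampling can be illustrated on a synthetic trace: a systematic sample with a fixed period and a random starting offset gives an occupancy estimate similar to a purely random sample. This is a sketch (helper names are assumptions), and note that systematic sampling can alias on strictly periodic traces, which is why the trace below is shuffled:

```python
import random

def sample_fraction(trace, block, ticks):
    """Fraction of the sampled ticks that land in the given block."""
    hits = sum(1 for t in ticks if trace[t] == block)
    return hits / len(ticks)

def random_ticks(n_ticks, n_samples, rng):
    """Purely random sampling: n_samples uniformly chosen ticks."""
    return [rng.randrange(n_ticks) for _ in range(n_samples)]

def systematic_ticks(n_ticks, period, rng):
    """Systematic sampling: a random offset, then one sample per period."""
    start = rng.randrange(period)
    return list(range(start, n_ticks, period))
```

On a well-mixed trace both schemes converge to the block's true time fraction, which is what lets ALEA use cheap periodic timer samples in place of truly random ones.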
18. 18
Sampling period and accuracy of the estimates
Accuracy ∼ the number of samples
[Figure: estimated execution time (s) and energy (J) vs. number of samples (0-10000); the confidence intervals shrink and the random error of the estimates relative to the measured time/energy decreases as the number of samples grows.]
Sampling period =⇒ Accuracy
19. 19
Sampling period and accuracy of the estimates
Sampling incurs overhead, which biases the estimates
↓ sampling period =⇒ ↓ random error, ↑ overhead
↑ sampling period =⇒ ↑ random error, ↓ overhead
Which sampling period to choose?
[Figure: overhead (%) and error (%) vs. sampling period (1-100 ms) for sequential and parallel runs on Sandy Bridge and Exynos; the optimal period is 10 ms, with overhead ∼1%.]
20. 20
Validation
14 benchmarks (SPEC 2000, PARSEC, Rodinia, SPEC OMP)
Validated against direct instrumentation
81% coverage

Average error of energy estimates:

Blocks            | Sandy Bridge | Exynos
all blocks        | 1.4 %        | 2.6 %
fine-grain blocks | 1.6 %        | 3.7 %
parallel blocks   | 3.1 %        | 3.6 %
all benchmarks    | 1.4 %        | 1.9 %
21. 21
Effect of cache instructions and pipelining
[Figure: power (W) for the Original block and variants containing only its arithmetic or only its cache instructions, and energy (J)/time (s) including an EPI-model estimate (EPIOriginal), on Sandy Bridge and Exynos; annotated energy differences of 50% (Sandy Bridge) and 29% (Exynos).]
Pipelining hides the latency (=⇒ energy) of cache accesses
EPI models can lead to significant errors
22. 22
Use cases
kmeans / Sandy Bridge
profiling: 50% of the total energy is spent in one block (Euclidean distance)
optimization strategy: align and restrict pointers, forced unroll
results: 7x energy reduction
ocean_cp / Exynos
profiling: more than 50% of the energy is spent in 6 blocks
optimization strategy: disable the predictive commoning optimization
results: 10% power reduction
raytrace / Exynos
profiling: 50% of the total energy is spent in 2 blocks (SphPeIntersect)
optimization strategy: remove redundant memory accesses and indirect addressing instructions
results: 6% energy reduction
23. 23
Conclusion
The proposed probabilistic approach and ALEA provide:
low overhead (∼1%, online profiling)
accurate estimates (Intel ∼1.4%, ARM ∼2.6%)
estimates at a fine granularity
an architecture-independent approach
ALEA can be effectively applied to optimize energy and power consumption
Future work:
improve the accuracy of the estimates
port to new architectures (GPUs and Intel Xeon Phi)
profiling of VMs
26. 25
Probabilistic model
Random sampling is approximated by systematic sampling
Power and a basic block are sampled simultaneously
For each block, time, energy, and power estimates are provided
For each estimate, a confidence interval is provided
See the paper for more details
28. 27
Use cases. Sandy Bridge
56% of the time is spent in one block (Euclidean distance)
Problems: unroll and auto-vectorization are not applied
Optimization strategy: align and restrict pointers, forced unroll
Results: 7x energy decrease
[Figure: time (s), power (W), and energy (100 J) vs. number of threads (0-9) for the basic block compiled with -O3 and with -O3 plus hints; a cache sharing effect is visible, and the saving is annotated as 221 Joules (697%).]
30. 29
Impact of Memory instructions
[Figure: power (W) of the block variants ordered by increasing cache access intensity, from Nop and Arithm through the Mem(L1)/Mem(L2) load and store variants up to Mem and Mem(L2), on Sandy Bridge; a similar ordering, ending at Original(L2), on Exynos.]
CPU power is primarily affected by cache accesses
31. 30
Probability to sample a basic block
Basic block execution
Introduce \(X_{bb_m}\) associated with each tick:
\[ X_{bb_m} = \begin{cases} 1, & \text{if } bb_m \text{ is the sampled basic block} \\ 0, & \text{otherwise} \end{cases} \quad (5) \]
Take one random sample. Probability that \(bb_m\) is sampled:
\[ p_{bb_m} = P(X_{bb_m} = 1) = \frac{C^1_{t_{bb_m}}}{C^1_{t_{exec}}} = \frac{\sum_{j=1}^{k} latency^j_{bb_m}}{t_{exec}} = \frac{t_{bb_m}}{t_{exec}} \quad (6) \]
32. 31
Execution time estimates
Take samples several times. Random sampling.
\(X_{bb_m}\) is random and follows the Bernoulli distribution
Estimate \(p_{bb_m}\) using the maximum likelihood estimator of the parameter \(p_{bb_m}\) of the Bernoulli distribution of \(X_{bb_m}\):
\[ \hat{p}_{bb_m} = \frac{n_{bb_m}}{n} \quad (7) \]
\(t_{bb_m}\) is estimated as
\[ \hat{t}_{bb_m} = \hat{p}_{bb_m} \cdot t_{exec} = \frac{n_{bb_m} \cdot t_{exec}}{n} \quad (8) \]
33. 32
Power and energy estimates
The same probabilistic approach
Power consumption is a random variable (normal distribution)
A realization of the variable is associated with each tick
The mean power consumption of \(bb_m\):
\[ \widehat{pow}_{bb_m} = \frac{1}{n_{bb_m}} \sum_{i=1}^{n_{bb_m}} pow^i_{bb_m} \quad (9) \]
Energy consumption of \(bb_m\):
\[ \hat{e}_{bb_m} = \widehat{pow}_{bb_m} \cdot \hat{t}_{bb_m} \quad (10) \]
34. 33
Quality of time estimates
Confidence interval for \(p_{bb_m}\):
\[ \hat{p}^{u}_{bb_m} = \hat{p}_{bb_m} + z_{\alpha/2}\sqrt{\frac{1}{n}\,\hat{p}_{bb_m}\left(1 - \hat{p}_{bb_m}\right)} \quad (11) \]
\[ \hat{p}^{l}_{bb_m} = \hat{p}_{bb_m} - z_{\alpha/2}\sqrt{\frac{1}{n}\,\hat{p}_{bb_m}\left(1 - \hat{p}_{bb_m}\right)} \quad (12) \]
\[ \hat{p}^{l}_{bb_m} \le p_{bb_m} \le \hat{p}^{u}_{bb_m} \quad (13) \]
Confidence interval for \(t_{bb_m}\):
\[ \hat{p}^{l}_{bb_m} \cdot t_{exec} \le t_{bb_m} \le \hat{p}^{u}_{bb_m} \cdot t_{exec} \quad (14) \]
35. 34
Bounds and confidence. Energy
We can similarly build a confidence interval for power:
\[ \widehat{pow}^{u}_{bb_m} = \widehat{pow}_{bb_m} + z_{\alpha/2}\frac{s}{\sqrt{n_{bb_m}}} \quad (15) \]
\[ \widehat{pow}^{l}_{bb_m} = \widehat{pow}_{bb_m} - z_{\alpha/2}\frac{s}{\sqrt{n_{bb_m}}} \quad (16) \]
\[ s = \sqrt{\frac{1}{n_{bb_m} - 1}\sum_{i=1}^{n_{bb_m}}\left(pow^i_{bb_m} - \widehat{pow}_{bb_m}\right)^2} \quad (17) \]
\[ \widehat{pow}^{l}_{bb_m} \le pow_{bb_m} \le \widehat{pow}^{u}_{bb_m} \quad (18) \]
Confidence interval for energy consumption:
\[ \hat{p}^{l}_{bb_m} \cdot t_{exec} \cdot \widehat{pow}^{l}_{bb_m} \le e_{bb_m} \le \hat{p}^{u}_{bb_m} \cdot t_{exec} \cdot \widehat{pow}^{u}_{bb_m} \quad (19) \]
37. 36
Experiments. Impact of memory instructions
How to optimize energy consumption?
Performance vs. power optimization
How to decrease power consumption? What affects power consumption?

Block          | Description
Basic block    | A copy of BBA
Mem            | Only memory access instructions of BBA
NoMem          | Only arithmetic/logic instructions of BBA
Mem(L2)        | Mem block with the size of accessed data limited to 2 MB (L2 cache size on Exynos)
Mem(L1)        | Mem block with the size of accessed data limited to 2 KB (L1 cache size on Exynos)
Mem(load)      | Mem block with load instructions only
Mem(store)     | Mem block with store instructions only
Mem(L2,load)   | Mem(L2) block with loads only
Mem(L2,store)  | Mem(L2) block with stores only
Mem(L1,load)   | Mem(L1) block with loads only
Mem(L1,store)  | Mem(L1) block with stores only
38. 37
Use case (Exynos). Power optimization: ocean_cp
More than 50% of the total execution time is spent in 6 basic blocks
optimization strategy: remove redundant cache accesses; disable prefetching and the predictive commoning optimization (up to 14% power decrease)
a different strategy should be applied for each basic block
DVFS could also be applied...

Block                 | Baseline Time (s) | Baseline Energy (J) | Opt. Time (s) | Opt. Energy (J) | Threads | Frequency | Manual opt.
bb1, jacobcalc2.C:301 | 2.03  | 8.48   | 1.87  | 6.03  | 4          | 1500 MHz         | No
bb2, slave2.C:641     | 1.54  | 6.70   | 1.31  | 4.16  | 2          | 1600 MHz         | Yes
bb3, laplacalc.C:83   | 2.02  | 9.53   | 2.55  | 7.98  | 2          | 1500 MHz         | No
bb4, multi.C:253      | 2.17  | 7.22   | 2.62  | 6.52  | 2          | 1500 MHz         | No
bb5, multi.C:235      | 2.36  | 7.88   | 3.29  | 5.56  | 1          | 1500 MHz         | No
bb6, multi.C:290      | 2.67  | 9.23   | 3.23  | 5.46  | 1          | 1500 MHz         | No
program               | 29.93 | 108.64 | 26.88 | 72.84 | 2.0 (avg.) | 1516 MHz (avg.)  | Yes