The escalating level of integration has led design engineers to embed pre-designed, pre-verified logic blocks on a single chip, forming a complete system-on-chip (SoC). This trend poses new challenges for design and test engineers. To ensure the testability of the entire system, test planning must be done during the design phase. To save test cost, the test application time must be reduced, which requires cores to be tested concurrently; however, testing multiple cores in parallel increases power dissipation. Test optimization must therefore account for both time and power. This paper presents a greedy-algorithm-based approach for scheduling core tests under test-time, power, test access mechanism (TAM) and bandwidth constraints. TAM width is allotted to the cores dynamically to save test time and utilize the full bandwidth. Scheduling is performed on the ITC’02 benchmark circuits, and experiments on these circuits show that the algorithm achieves lower test application time than the multiple-constraint-driven SoC approach.
The buffer allocation problem is an important research issue in manufacturing system design. The objective of this paper is to find the optimum buffer allocation for a closed queuing network with multiple servers at each node, where the total number of buffers in the network is held constant. An attempt is made to find the optimum number of pallets required to maximize the throughput of a manufacturing system that has a pre-specified space for allocating pallets. Expanded Mean Value Analysis is used to evaluate the performance of the closed queuing network, and Particle Swarm Optimization is used as the generative technique to optimize the buffer allocation. Numerical experiments demonstrate the effectiveness of the procedure.
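The Mean Value Analysis recursion behind this kind of evaluation can be sketched for the baseline single-server case (the paper's Expanded MVA handles multi-server nodes; the visit ratios and service times below are illustrative, not from the paper):

```python
def mva_throughput(visits, service, n_jobs):
    """Exact Mean Value Analysis for a closed queueing network with
    single-server FCFS stations. Returns system throughput for a
    population of n_jobs circulating jobs (pallets)."""
    num_stations = len(visits)
    q = [0.0] * num_stations          # mean queue length per station, 0 jobs
    x = 0.0
    for n in range(1, n_jobs + 1):
        # residence time: service time * (1 + queue seen on arrival)
        r = [service[k] * (1.0 + q[k]) for k in range(num_stations)]
        x = n / sum(visits[k] * r[k] for k in range(num_stations))
        # Little's law applied per station
        q = [x * visits[k] * r[k] for k in range(num_stations)]
    return x

# three stations visited once each; scan pallet counts 1..10
best = max(range(1, 11),
           key=lambda n: mva_throughput([1, 1, 1], [0.2, 0.5, 0.3], n))
```

Throughput grows monotonically toward the bottleneck limit (here 1/0.5 = 2 jobs per unit time), so in practice the pallet search is bounded by the available pallet space rather than by the curve itself.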
Novel Scheme for Minimal Iterative PSO Algorithm for Extending Network Lifeti...IJECEIAES
Clustering is one of the operations in a wireless sensor network that offers both streamlined data routing and energy efficiency. In this context, Particle Swarm Optimization (PSO) has already proved its effectiveness in enhancing clustering, energy efficiency, etc. However, PSO suffers from a high iteration count and computational complexity when solving complex problems, e.g., allocating transmission energy to the cluster head in a dynamic network. Therefore, we present a novel, simple, yet cost-effective method that enhances the conventional PSO approach to minimize the iterative steps and maximize the probability of selecting a better cluster head. A significant research contribution of the proposed system is its assurance of minimizing both the transmission energy and the receiving energy of a cluster head. The study outcome showed the proposed system to be more energy efficient than the conventional one.
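For reference, the conventional global-best PSO that such schemes enhance can be sketched as follows (the inertia and acceleration coefficients are typical textbook values, not the paper's settings):

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimising f over [-5, 5]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]         # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval

best, val = pso(lambda x: sum(t * t for t in x), dim=3)
```

The iteration cost the paper targets is visible here: every particle evaluates the fitness function once per iteration, so reducing iterations directly reduces energy spent on computation.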
Nonlinear filtering approaches to field mapping by sampling using mobile sensorsijassn
This work proposes a novel application of existing powerful nonlinear filters, such as the standard
Extended Kalman Filter (EKF), some of its variants and the standard Unscented Kalman Filter (UKF), to
the estimation of a continuous spatio-temporal field that is spread over a wide area, and hence represented
by a large number of parameters when parameterized. We couple these filters with the powerful scheme of
adaptive sampling performed by a single mobile sensor, and investigate their performances with a view to
significantly improving the speed and accuracy of the overall field estimation. An extensive simulation work
was carried out to show that different variants of the standard EKF and the standard UKF can be used to
improve the accuracy of the field estimate. This paper also aims to provide some guidelines for users of
these filters in reaching a practical trade-off between the desired field estimation accuracy and the
required computational load.
Experimental Testing of a Real-Time Implementation of a PMU-Based Wide-Area D...Power System Operation
The modern power grid is being used under operating conditions of increasing stress, giving
rise to grid stability issues. One of these stability issues is the phenomenon of inter-area oscillations.
Simulations have demonstrated the advantages of Wide-area Measurement Signals (WAMS)-based Oscillation Damping Controls in achieving improved electromechanical mode damping compared to traditional,
local signal-based Power System Stabilizers (PSS). This work takes an existing Phasor-based oscillation
damping (POD) algorithm and uses it to implement a proof-of-concept, wide-area, real-time controller
on National Instruments hardware. The developed prototype is tested in a real-time hardware-in-the-loop (RT-HIL) setup using OPAL-RT’s eMEGASIM real-time simulation platform and synchrophasor data
from actual Phasor Measurement Units (PMUs). The prototype and experiments provide insight into the
feasibility and real-world limitations of wide-area controls. Further, it is demonstrated how the proposed
control architecture has applications independent of the controlled power system device. The challenges
faced and the solutions implemented, together with the present prototype’s limitations, are also discussed.
In our project, we propose a novel architecture that generates test patterns with reduced switching activity. The LP-TPG (test pattern generator) structure consists of a modified low-power linear feedback shift register (LP-LFSR), an m-bit counter, a gray code generator, a NOR-gate structure and an XOR array. The m-bit counter is initialized with zeros and generates 2^m test patterns in sequence. The m-bit counter and gray code generator are controlled by a common clock signal [CLK]. The output of the m-bit counter is applied as input to the gray code generator and the NOR-gate structure. When all bits of the counter output are zero, the NOR-gate output is one; only then is the clock signal applied to activate the LP-LFSR, which generates the next seed. The seed generated by the LP-LFSR is exclusive-ORed with the data generated by the gray code generator, and the patterns produced by the exclusive-OR array are the final output patterns. The proposed architecture is simulated using ModelSim and synthesized using Xilinx ISE 13.2, and it will be implemented on an XC3S500E Spartan-3E FPGA board for hardware testing. The Xilinx ChipScope tool will be used to observe internal results while the logic is running on the FPGA.
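The data path described above can be sketched in a few lines of Python (the LFSR tap positions and bit widths below are assumptions for illustration; the paper uses a modified low-power LFSR design):

```python
def lfsr_next(state, nbits=8, taps=(7, 5, 4, 3)):
    """One step of a Fibonacci LFSR; taps are an assumed maximal set."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << nbits) - 1)

def lp_tpg(m=4, nbits=8, seed=0b10101010, n_patterns=32):
    """Sketch of the described TPG: an m-bit counter drives a gray-code
    generator; the LFSR is clocked (producing a new seed) only when the
    counter is all zeros, and each pattern is seed XOR gray(counter)."""
    patterns = []
    counter = 0
    for _ in range(n_patterns):
        if counter == 0:                    # NOR-gate output high
            seed = lfsr_next(seed, nbits)   # advance LFSR to next seed
        gray = counter ^ (counter >> 1)     # gray code of counter value
        patterns.append(seed ^ gray)
        counter = (counter + 1) % (1 << m)  # m-bit counter wraps at 2^m
    return patterns

pats = lp_tpg()
```

Because successive gray codes differ in exactly one bit, consecutive patterns within one counter cycle also differ in a single bit, which is what keeps the switching activity low.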
Computational Performance of Phase Field Calculations using a Matrix-Free (Su...Stephen DeWitt
Comparison of the performance of the PRISMS-PF finite element phase field code vs. a standard finite difference code. Performance is compared for an Ostwald Ripening test case and PFHub Benchmark Problem #7b (MMS Allen-Cahn). These tests demonstrate that PRISMS-PF is several times faster than a standard finite difference code.
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open access journal, available online and in print, that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days of acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in...TELKOMNIKA JOURNAL
Mobile Cloud Computing (MCC) is an emerging technology for improving mobile service quality. MCC resources are dynamically allocated to users who pay for them based on their needs. The drawback of this process is that it is prone to failure and demands a high energy input. Resource providers mainly focus on resource performance and utilization while observing the constraints of the service level agreement (SLA). Resource performance can be achieved through virtualization techniques, which facilitate the sharing of resource providers’ information between different virtual machines. To address these issues, this study sets forth a novel algorithm (HSO) that optimizes energy-efficient resource management in the cloud; the proposed method uses a cost- and runtime-effective model to create a minimum-energy configuration of the cloud compute nodes while guaranteeing that all minimum performance requirements are maintained. The cost functions cover energy, performance and reliability concerns. With the proposed model, the performance of the hybrid swarm algorithm was significantly increased, as observed by optimizing the number of tasks through simulation (power consumption was reduced by 42%). The simulation studies also showed a reduction of about 20% in the number of required calculations compared to the traditional static approach. There was also a decrease in node loss, which allowed the optimization algorithm to achieve minimal overhead on cloud compute resources while still saving energy significantly. In conclusion, this study presents an energy-aware optimization model that describes the required system constraints, and further proposes techniques to determine the best overall solution.
Enhancing radial distribution system performance by optimal placement of DST...IJECEIAES
In this paper, a novel modified optimization method is used to find the optimal location and size for placing a distribution static compensator (DSTATCOM) in a radial distribution test feeder in order to improve its performance by minimizing the total power losses of the feeder, enhancing the voltage profile and reducing costs. The modified grey wolf optimization algorithm is used for the first time to solve this kind of optimization problem. An objective function was developed that includes the total power loss of the system and the costs due to that loss. The proposed method is applied to two different test distribution feeders (33-bus and 69-bus test systems) using different DSTATCOM sizes, and the acquired results were analyzed and compared to other recent optimization methods applied to the same test feeders to confirm the effectiveness of the method and its superiority over those methods. The major finding from the obtained results is that the applied technique achieved the lowest total system power loss, the best voltage profile improvement and the greatest reduction in loss-related costs compared to the other methods.
WIND SPEED & POWER FORECASTING USING ARTIFICIAL NEURAL NETWORK (NARX) FOR NEW...Journal For Research
Continuously depleting conventional fuel reserves and increasing global warming concerns have diverted world attention towards non-conventional energy sources. Among these, wind energy can be considered one of the cleanest sources, with minimal pollution or harmful emissions, and it has the potential to decrease reliance on conventional energy sources. Today wind energy can play a vital role in meeting our energy demands; however, it faces issues such as its intermittent nature and frequency instability. To reduce these issues, knowledge of future weather conditions and the wind speed trend is required. This work describes the implementation of a NARX artificial neural network for wind speed and power forecasting using historical data available from wind farms.
Graph-Based Technique for Extracting Keyphrases In a Single-Document (GTEK)Mahmoud Alfarra
This paper was selected as the best paper in the Data Mining session at ICPET 2018.
Paper about:
A Graph-based Technique for Extracting Keyphrases in a single document (GTEK) is introduced.
GTEK is based on a graph-based representation of the text.
GTEK is motivated by:
A phrase may be important if it appears in the most important sentences of the document.
The most important keyphrases must cover all sub-topics of the document.
GTEK groups the sentences into graph-model clusters, then ranks them using the TextRank algorithm.
Finally, the most frequent phrases in the highest-ranked sentences are selected as the document's keyphrases.
Experimental results show that GTEK extracts most of the keyphrases in two datasets.
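The ranking step can be illustrated with a plain TextRank power iteration over a sentence-similarity matrix (a simplified stand-in: GTEK first groups sentences into graph-model clusters, and the similarity values below are made up):

```python
def textrank(similarity, d=0.85, iters=50):
    """TextRank power iteration over a symmetric sentence-similarity
    matrix; returns one importance score per sentence."""
    n = len(similarity)
    scores = [1.0 / n] * n
    out_weight = [sum(row) or 1.0 for row in similarity]
    for _ in range(iters):
        new = []
        for i in range(n):
            # each neighbour j passes score proportional to edge weight
            rank = sum(similarity[j][i] / out_weight[j] * scores[j]
                       for j in range(n) if j != i)
            new.append((1 - d) / n + d * rank)
        scores = new
    return scores

# toy 3-sentence graph: sentence 0 overlaps heavily with 1 and 2
sim = [[0.0, 0.8, 0.6],
       [0.8, 0.0, 0.1],
       [0.6, 0.1, 0.0]]
scores = textrank(sim)
top = max(range(3), key=lambda i: scores[i])
```

Keyphrase candidates would then be harvested from the sentences with the highest scores, matching the frequency-based selection described above.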
Proposing a scheduling algorithm to balance the time and cost using a genetic...Editor IJCATR
Grid computing is a hardware and software infrastructure that provides affordable, sustainable and reliable access to resources. Its aim is to create a supercomputer using free resources. One of the challenges of Grid computing is the scheduling problem, which is regarded as a tough issue. Since scheduling is a non-deterministic problem in the Grid, deterministic algorithms cannot be used to improve it.
In this paper, a combination of genetic algorithms and binary gravitational attraction is used to solve the scheduling problem, where the reduction of task execution time and the cost-effective use of simultaneous resources are investigated. The user determines the execution time parameter and the cost-effective use of resources. In this algorithm, a new approach that leads to a balanced load on resources is used in the selection of resources. Experimental results reveal that the proposed algorithm reaches better results than other algorithms in terms of cost, time and selection of the best resource.
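A minimal sketch of the genetic-algorithm half of such a scheduler, under an assumed weighted time-plus-cost fitness (the paper hybridises this with binary gravitational search, which is omitted here; all run times and costs are illustrative):

```python
import random

def ga_schedule(times, costs, alpha=0.5, pop=30, gens=60, seed=1):
    """Toy GA mapping tasks to resources, minimising
    alpha * makespan + (1 - alpha) * total cost.
    times[t][r], costs[t][r]: run time / cost of task t on resource r."""
    rng = random.Random(seed)
    n_tasks, n_res = len(times), len(times[0])

    def fitness(chrom):
        load = [0.0] * n_res
        cost = 0.0
        for t, r in enumerate(chrom):
            load[r] += times[t][r]
            cost += costs[t][r]
        return alpha * max(load) + (1 - alpha) * cost

    popl = [[rng.randrange(n_res) for _ in range(n_tasks)]
            for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=fitness)
        popl = popl[:pop // 2]                # elitist truncation selection
        while len(popl) < pop:
            a, b = rng.sample(popl[:10], 2)   # parents from the elite
            cut = rng.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.2:            # mutation: reassign one task
                child[rng.randrange(n_tasks)] = rng.randrange(n_res)
            popl.append(child)
    best = min(popl, key=fitness)
    return best, fitness(best)

# four tasks on two resources (made-up numbers)
times = [[2, 4], [3, 1], [2, 2], [4, 3]]
costs = [[1, 2], [2, 1], [1, 1], [3, 2]]
sched, f = ga_schedule(times, costs)
```

Using max load (makespan) in the fitness is one simple way to push the GA toward the balanced resource load the paper aims for.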
Design of accumulator Based 3-Weight Pattern Generation using LP-LSFRIOSR Journals
Abstract: The objective of BIST is to reduce power dissipation without affecting fault coverage. Weighted pseudorandom built-in self-test (BIST) schemes have been utilized to drive down the number of vectors needed to achieve complete fault coverage in BIST applications. Weighted sets comprising three weights, namely 0, 1 and 0.5, have been successfully utilized so far for test pattern generation, since they result in both low testing time and low power consumption. In this approach, the single-input-change patterns generated by a counter and a gray code generator are exclusive-ORed with the seed generated by the low-power linear feedback shift register (LP-LFSR). Since accumulators are commonly found in current VLSI chips, this scheme can also be efficiently utilized to reduce the hardware of BIST pattern generation. The implementation results verify that the testing power of the proposed method is reduced by a significant percentage. Keywords: built-in self-test (BIST), test per clock, VLSI testing, weighted test pattern generation, low-power linear feedback shift register (LP-LFSR).
Vlsi Design of Low Transition Low Power Test Pattern Generator Using Fault Co...iosrjce
Nowadays, a highly integrated multi-layer board with ICs is virtually impossible to access physically for testing. The major problems detected during circuit testing include test generation and gate-to-I/O-pin problems. In the design of any circuit, low power consumption and low hardware utilization are important design parameters. Therefore, reliable testing methods are introduced that reduce both the cost of the required hardware and the power consumed by the device. In this project, a new fault-coverage test pattern generator is built using a linear feedback shift register, called FC-LFSR, which can perform fault analysis and reduce the total power of the circuit. The generator produces three intermediate patterns between successive random patterns, which reduces the transition activity of the primary inputs so that the switching activity inside the circuit under test is reduced. The generated test patterns are applied to the c17 benchmark circuit, and the results give the fault coverage of the tested circuit. The simulation of this design is performed with Xilinx ISE software using the Verilog hardware description language.
Artificial Neural Network Model for Compressive Strength of Lateritic BlocksIJAEMSJORNAL
Lateritic soils are locally abundant and relatively cheap to use for block production. Their use has gone a long way in reducing the cost of block production and of construction work in general. In order to optimize the usefulness of lateritic soil, there is a need to model the properties of lateritic blocks. Compressive strength is an important property of a lateritic block that must be known, but it cannot be guessed easily because of the block mix proportion and production process. Statistical models used in predicting the properties of lateritic blocks operate on a restricted range of data; such a model cannot predict from input data outside the range used to develop it. Hence, a model that can predict the compressive strength of lateritic blocks for any given mix ratio became necessary. This study developed an artificial neural network model for predicting the compressive strength of lateritic blocks. Lateritic blocks were produced with mix ratios ranging from 1:4 to 1:12 and cured for 7, 14 and 28 days. The 28th-day experimental results and results obtained from the literature on similar work were used to formulate the model; the test data comprised a total of 155 samples. The maximum compressive strength predicted by the model was 3.06 N/mm^2, corresponding to a mix ratio of 0.4:1:4 of water-cement ratio, cement and lateritic soil. The model accuracy was tested using the Fisher test: the computed F was 1.008 against a tabulated F of 3.5, so the model satisfied the test. The model result also compares favourably with the experimental result.
Multi-objective Optimization Scheme for PID-Controlled DC MotorIAES-IJPEDS
The DC motor is the most basic electro-mechanical equipment, well known for its merits and simplicity. The performance of a DC motor is assessed by several qualities that are most likely to contradict each other, e.g., settling time and overshoot percentage. Most controller optimization problems are multi-objective in nature, since they normally have several conflicting objectives that must be met simultaneously. In this study, grey relational analysis (GRA) was combined with the Taguchi method to search for the optimum PID parameters for a multi-objective problem. First, an L9 (3^3) orthogonal array was used to plan out the processing parameters that would affect the DC motor's speed. GRA was then applied to overcome the single-quality-characteristic limitation of the Taguchi method, and the optimized PID parameter combination for multiple quality characteristics was obtained from the GRA response table and response graph. Signal-to-noise (S/N) ratio calculation and analysis of variance (ANOVA) were performed to find the significant factors. Lastly, the reliability and reproducibility of the experiment were verified with a 95% confidence interval (CI).
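The grey relational analysis step can be sketched as follows (the normalization direction per quality and the distinguishing coefficient zeta = 0.5 are the usual textbook choices; the response data below is illustrative, not the paper's L9 results):

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """Grey relational analysis over experiment responses.
    responses[i][q]: quality q measured in experiment i;
    larger_better[q]: True if quality q should be maximised.
    Returns one grade per experiment (higher = better trade-off)."""
    n, m = len(responses), len(responses[0])
    # normalise each quality column to [0, 1], with 1 = best
    norm = [[0.0] * m for _ in range(n)]
    for q in range(m):
        col = [responses[i][q] for i in range(n)]
        lo, hi = min(col), max(col)
        for i in range(n):
            if larger_better[q]:
                norm[i][q] = (responses[i][q] - lo) / (hi - lo)
            else:
                norm[i][q] = (hi - responses[i][q]) / (hi - lo)
    # grey relational coefficient against the ideal sequence (all ones)
    grades = []
    for i in range(n):
        deltas = [1.0 - norm[i][q] for q in range(m)]
        coeffs = [zeta / (d + zeta) for d in deltas]  # dmin=0, dmax=1
        grades.append(sum(coeffs) / m)
    return grades

# settling time (minimise) and overshoot % (minimise) for four PID trials
grades = grey_relational_grades(
    [[0.8, 12.0], [0.5, 20.0], [0.6, 8.0], [1.2, 5.0]],
    larger_better=[False, False])
best_trial = max(range(4), key=lambda i: grades[i])
```

Trial 2 wins here because it is close to best on both qualities at once, which is exactly the compromise a single-quality Taguchi analysis cannot express.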
Tdtd-Edr: Time Orient Delay Tolerant Density Estimation Technique Based Data ...theijes
The International Journal of Engineering & Science aims to provide a platform for researchers, engineers, scientists and educators to publish their original research results, exchange new ideas, and disseminate information on innovative designs, engineering experiences and technological skills. It is also the journal's objective to promote engineering and technology education. All papers submitted to the journal are blind peer-reviewed, and only original articles are published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance and readability.
Learning Sparse Networks using Targeted DropoutSeunghyun Hwang
Review : Learning Sparse Networks using Targeted Dropout
- by Seunghyun Hwang (Yonsei University, Severance Hospital, Center for Clinical Data Science)
The efficacy of neural network (NN) and partial least squares (PLS) methods is compared for the prediction of NMR chemical shifts for both 1H and 13C nuclei using very large databases containing millions of chemical shifts. The chemical structure description scheme used in this work is based on individual atoms rather than functional groups. The performances of each of the methods were optimized in a systematic manner described in this work. Both of the methods, least squares and neural network analysis, produce results of a very similar quality but the least squares algorithm is approximately 2-3 times faster.
Prediction of Extreme Wind Speed Using Artificial Neural Network ApproachScientific Review SR
Accurate prediction of wind speed at wind farms is necessary because of the intermittent nature of wind in any region. A number of methods, such as persistence, physical, statistical, spatial correlation, artificial intelligence and hybrid methods, are generally available for wind speed prediction. In this paper, ANN-based methods, viz. Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) neural networks, are used. The performance of the networks applied to wind speed prediction is evaluated by model performance indicators, viz. Correlation Coefficient (CC), Model Efficiency (MEF) and Mean Absolute Percentage Error (MAPE). Meteorological parameters such as maximum and minimum temperature, air pressure, solar radiation and altitude are used as inputs for the MLP and RBF networks to predict the extreme wind speed at Delhi. The study shows that the values of CC, MEF and MAPE between the observed and predicted wind speed using MLP are 0.992, 95.4% and 4.3% respectively while training the network. For the RBF network, the values of CC, MEF and MAPE are 0.992, 95.9% and 3.0% respectively. The model performance analysis indicates that RBF is the better suited of the two networks studied for prediction of extreme wind speed at Delhi.
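The three performance indicators can be computed as below (the paper does not state its exact MEF formula; the Nash-Sutcliffe model efficiency expressed as a percentage is assumed, and the sample series is made up):

```python
import math

def cc(obs, pred):
    """Pearson correlation coefficient between observed and predicted."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    num = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    den = math.sqrt(sum((o - mo) ** 2 for o in obs)
                    * sum((p - mp) ** 2 for p in pred))
    return num / den

def mef(obs, pred):
    """Model efficiency (Nash-Sutcliffe form), as a percentage."""
    mo = sum(obs) / len(obs)
    return 100.0 * (1 - sum((o - p) ** 2 for o, p in zip(obs, pred))
                    / sum((o - mo) ** 2 for o in obs))

def mape(obs, pred):
    """Mean absolute percentage error."""
    return 100.0 * sum(abs((o - p) / o)
                       for o, p in zip(obs, pred)) / len(obs)

obs = [3.1, 4.0, 5.2, 6.1]   # observed wind speeds (illustrative)
pred = [3.0, 4.2, 5.0, 6.3]  # predicted wind speeds (illustrative)
r, eff, err = cc(obs, pred), mef(obs, pred), mape(obs, pred)
```

A perfect forecast gives CC = 1, MEF = 100% and MAPE = 0%, which is the direction in which the reported MLP and RBF figures should be read.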
In our project, we propose a novel architecture which generates the test patterns with reduced switching activities. LP-TPG (Test pattern Generator) structure consists of modified low power linear feedback shift register (LP-LFSR), m-bit counter; gray counter, NOR-gate structure and XOR-array. The m-bit counter is initialized with Zeros and which generates 2m test patterns in sequence. The m-bit counter and gray code generator are controlled by common clock signal [CLK]. The output of m-bit counter is applied as input to gray code generator and NOR-gate structure. When all the bits of counter output are Zero, the NOR-gate output is one. Only when the NOR-gate output is one, the clock signal is applied to activate the LP-LFSR which generates the next seed. The seed generated from LP-LFSR is Exclusive–OR ed with the data generated from gray code generator. The patterns generated from the Exclusive–OR array are the final output patterns. The proposed architecture is simulated using Modelsim and synthesized using Xilinx ISE 13.2 and it will be implemented on XC3S500e Spartan 3E FPGA board for hardware implementation and testing. The Xilinx Chip scope tool will be used to test the FPGA inside results while the logic running on FPGA.
Computational Performance of Phase Field Calculations using a Matrix-Free (Su...Stephen DeWitt
Comparison of the performance of the PRISMS-PF finite element phase field code vs. a standard finite difference code. Performance is compared for an Ostwald Ripening test case and PFHub Benchmark Problem #7b (MMS Allen-Cahn). These tests demonstrate that PRISMS-PF is several times faster than a standard finite difference code.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in...TELKOMNIKA JOURNAL
Mobile Cloud Computing (MCC) is an emerging technology for the improvement of mobile service quality. MCC resources are dynamically allocated to the users who pay for the resources based on their needs. The drawback of this process is that it is prone to failure and demands a high energy input. Resource providers mainly focus on resource performance and utilization with more consideration on the constraints of service level agreement (SLA). Resource performance can be achieved through virtualization techniques which facilitates the sharing of resource providers’ information between different virtual machines. To address these issues, this study sets forth a novel algorithm (HSO) that optimized energy efficiency resource management in the cloud; the process of the proposed method involves the use of the developed cost and runtime-effective model to create a minimum energy configuration of the cloud compute nodes while guaranteeing the maintenance of all minimum performances. The cost functions will cover energy, performance and reliability concerns. With the proposed model, the performance of the Hybrid swarm algorithm was significantly increased, as observed by optimizing the number of tasks through simulation, (power consumption was reduced by 42%). The simulation studies also showed a reduction in the number of required calculations by about 20% by the inclusion of the presented algorithms compared to the traditional static approach. There was also a decrease in the node loss which allowed the optimization algorithm to achieve a minimal overhead on cloud compute resources while still saving energy significantly. Conclusively, an energy-aware optimization model which describes the required system constraints was presented in this study, and a further proposal for techniques to determine the best overall solution was also made.
Enhancing radial distribution system performance by optimal placement of DST...IJECEIAES
In this paper, A novel modified optimization method was used to find the optimal location and size for placing distribution Static Compensator in the radial distribution test feeder in order to improve its performance by minimizing the total power losses of the test feeder, enhancing the voltage profile and reducing the costs. The modified grey wolf optimization algorithm is used for the first time to solve this kind of optimization problem. An objective function was developed to study the radial distribution system included total power loss of the system and costs due to power loss in system. The proposed method is applied to two different test distribution feeders (33 bus and 69 bus test systems) using different Dstatcom sizes and the acquired results were analyzed and compared to other recent optimization methods applied to the same test feeders to ensure the effectiveness of the used method and its superiority over other recent optimization mehods. The major findings from obtained results that the applied technique found the most minimized total power loss in system, the best improved voltage profile and most reduction in costs due power loss compared to other methods.
WIND SPEED & POWER FORECASTING USING ARTIFICIAL NEURAL NETWORK (NARX) FOR NEW...Journal For Research
Continuously depleting conventional fuel reserves and growing global-warming concerns have diverted the world’s attention towards non-conventional energy sources. Among these, wind energy can be considered one of the cleanest sources, with minimal pollution or harmful emissions, and it has the potential to reduce reliance on conventional energy sources. Today wind energy can play a vital role in meeting our energy demands; however, it faces issues such as its intermittent nature and frequency instability. Reducing these issues requires knowledge of future weather conditions and wind speed trends. This work describes the implementation of a NARX artificial neural network for wind speed and power forecasting using historical data available from wind farms.
Graph-Based Technique for Extracting Keyphrases In a Single-Document (GTEK)Mahmoud Alfarra
This paper was the best paper in the Data Mining session at ICPET 2018.
Paper about:
Graph-based Technique for Extracting Keyphrases in a single document (GTEK) is introduced.
GTEK is based on the graph-based representation of text.
GTEK is motivated by the following observations:
A phrase may be important if it appears in the most important sentences in the document.
The most important keyphrases must cover all sub-topics of the document.
GTEK groups the sentences into graph-model clusters, then ranks them using the TextRank algorithm.
Finally, the most frequent phrases in the highest-ranked sentences are selected as the document keyphrases.
Experimental results show that GTEK extracts most of the keyphrases on two datasets.
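The sentence-ranking step relies on TextRank, which is a PageRank-style iteration over a sentence-similarity graph. The sketch below is a generic illustration under assumed names (a dense similarity matrix, damping factor 0.85), not GTEK's actual implementation:

```python
def textrank(sim, d=0.85, iters=50):
    """Score nodes of a weighted similarity graph with TextRank.

    `sim` is a symmetric matrix of pairwise sentence similarities;
    scores follow the PageRank recurrence with damping factor `d`.
    """
    n = len(sim)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            s = 0.0
            for j in range(n):
                if j == i or sim[j][i] == 0:
                    continue
                out_w = sum(sim[j][k] for k in range(n) if k != j)
                if out_w > 0:
                    # j passes score to i in proportion to edge weight
                    s += sim[j][i] / out_w * scores[j]
            new.append((1 - d) / n + d * s)
        scores = new
    return scores
```

Sentences would then be sorted by score, and the most frequent phrases of the top-ranked sentences become keyphrase candidates.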
Proposing a scheduling algorithm to balance the time and cost using a genetic...Editor IJCATR
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to computing resources. Its aim is to create a supercomputer out of free resources. One of the challenges in grid computing is the scheduling problem, which is regarded as a tough issue. Since scheduling in the grid is a non-deterministic problem, deterministic algorithms cannot be used to improve it.
In this paper, a combination of genetic algorithms and binary gravitational attraction is used to solve the scheduling problem, where the reduction of execution time and the cost-effective use of simultaneous resources are investigated. In this case, the user determines the execution-time parameter and the cost-effective use of resources. The algorithm uses a new resource-selection approach that leads to a balanced load across resources. Experimental results reveal that the proposed algorithm achieves better results than other algorithms in terms of cost, time, and selection of the best resource.
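A toy version of the genetic-algorithm half of such a scheduler (tournament selection, one-point crossover, point mutation over task-to-resource assignments, makespan as fitness) might look like this. The gravitational-attraction component is omitted, and all names here are illustrative assumptions, not the paper's algorithm:

```python
import random

def makespan(assign, task_len, speed):
    """Finish time of the most loaded resource under an assignment."""
    load = [0.0] * len(speed)
    for t, r in enumerate(assign):
        load[r] += task_len[t] / speed[r]
    return max(load)

def ga_schedule(task_len, speed, pop=30, gens=60, seed=1):
    """Tiny genetic algorithm over task-to-resource assignments."""
    rng = random.Random(seed)
    n, m = len(task_len), len(speed)
    P = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a, b = rng.sample(P, 2)                 # tournament of two
            p1 = min(a, b, key=lambda s: makespan(s, task_len, speed))
            a, b = rng.sample(P, 2)
            p2 = min(a, b, key=lambda s: makespan(s, task_len, speed))
            cut = rng.randrange(1, n)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                  # point mutation
                child[rng.randrange(n)] = rng.randrange(m)
            Q.append(child)
        P = Q
    return min(P, key=lambda s: makespan(s, task_len, speed))
```

A real grid scheduler would add the cost term and load-balancing bias to the fitness function rather than using makespan alone.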
Design of accumulator Based 3-Weight Pattern Generation using LP-LSFRIOSR Journals
Abstract: The objective of the BIST is to reduce power dissipation without affecting fault coverage. Weighted pseudorandom built-in self-test (BIST) schemes have been utilized to drive down the number of vectors needed to achieve complete fault coverage in BIST applications. Weighted sets comprising three weights, namely 0, 1, and 0.5, have been successfully utilized so far for test pattern generation, since they result in both low testing time and low power consumption. In this approach, the single-input-change patterns generated by a counter and a Gray code generator are exclusive-ORed with the seed generated by the low-power linear feedback shift register (LP-LFSR). Since accumulators are commonly found in current VLSI chips, this scheme can also be efficiently utilized to drive down the hardware cost of BIST pattern generation. The implementation results verify that the testing power for the proposed method is reduced by a significant percentage. Keywords: built-in self-test (BIST), test per clock, VLSI testing, weighted test pattern generation, low-power linear feedback shift register (LP-LFSR).
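For readers new to BIST, the behaviour of a generic Fibonacci LFSR (the building block behind the LP-LFSR, whose internal details the abstract does not give) can be sketched as:

```python
def lfsr(seed, taps, nbits, count):
    """Fibonacci LFSR: the feedback bit is the XOR of the tapped bit
    positions; each step shifts the register left and emits its state."""
    state = seed
    out = []
    for _ in range(count):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        out.append(state)
    return out
```

Taps (2, 1) correspond to the primitive polynomial x^3 + x^2 + 1, so a 3-bit register cycles through all 7 nonzero states. In a weighted scheme like the one above, such a stream would be exclusive-ORed with counter/Gray-code patterns to bias bit probabilities toward 0, 1, or 0.5.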
Vlsi Design of Low Transition Low Power Test Pattern Generator Using Fault Co...iosrjce
Nowadays, highly integrated multilayer boards with ICs are virtually impossible to access physically for testing. The major problems detected during circuit testing include test generation and gate-to-I/O-pin problems. In the design of any circuit, low power consumption and low hardware utilization are important design parameters. Therefore, reliable testing methods are introduced that reduce the cost of the hardware required and the power consumed by the device. In this project a new fault-coverage test pattern generator is built using a linear feedback shift register, called FC-LFSR, which can perform fault analysis and reduce the total power of the circuit. The generator inserts three intermediate patterns between the random patterns, which reduces the transition activity of the primary inputs so that the switching activity inside the circuit under test is reduced. The generated test patterns are applied to the c17 benchmark circuit, and the results report the fault coverage of the circuit being tested. The design is simulated in Xilinx ISE using the Verilog hardware description language.
Artificial Neural Network Model for Compressive Strength of Lateritic BlocksIJAEMSJORNAL
Lateritic soil is locally abundant and relatively cheap to use for block production. Its use has gone a long way in reducing the cost of block production and of construction work in general. In order to optimize the usefulness of lateritic soil, there is a need to model the properties of lateritic blocks. Compressive strength is an important property of lateritic blocks that must be known, but it cannot be guessed easily because of the block mix proportion and production processes. Statistical models used to predict the properties of lateritic blocks operate on a restricted range of data; such a model cannot predict from input data outside the range used in developing it. A model that can predict the compressive strength of lateritic blocks for any given mix ratio therefore became necessary. This study developed an artificial neural network model for predicting the compressive strength of lateritic blocks. Lateritic blocks were produced with mix ratios ranging from 1:4 to 1:12 and cured for 7, 14, and 28 days. The 28th-day experimental results and results obtained from the literature on similar work were used to formulate the model. The test data comprised a total of 155 samples. The maximum compressive strength predicted by the model was 3.06 N/mm^2, corresponding to a mix ratio of 0.4:1:4 of water-cement ratio, cement, and lateritic soil. The model accuracy was tested using the Fisher test: the computed F was 1.008 against a table value of 3.5, so the model satisfied the test. The model result also compares favourably with the experimental result.
Multi-objective Optimization Scheme for PID-Controlled DC MotorIAES-IJPEDS
The DC motor is the most basic electro-mechanical equipment, well known for its merits and simplicity. The performance of a DC motor is assessed based on several qualities that most likely contradict each other, e.g. settling time and overshoot percentage. Most controller optimization problems are multi-objective in nature, since they normally have several conflicting objectives that must be met simultaneously. In this study, grey relational analysis (GRA) was combined with the Taguchi method to search for the optimum PID parameters for the multi-objective problem. First, an L9 (3^3) orthogonal array was used to plan out the processing parameters that would affect the DC motor’s speed. GRA was then applied to overcome the single-quality-characteristic limitation of the Taguchi method, and the optimized PID parameter combination for multiple quality characteristics was obtained from the response table and the response graph of the GRA. Signal-to-noise ratio (S/N ratio) calculation and analysis of variance (ANOVA) were performed to find the significant factors. Lastly, the reliability and reproducibility of the experiment were verified with a confidence interval (CI) of 95%.
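The two numeric ingredients named in the abstract, the Taguchi S/N ratio and grey relational coefficients, can be computed as follows. This is a generic sketch (smaller-the-better S/N, distinguishing coefficient 0.5); the exact normalisation used in the paper is not given:

```python
import math

def sn_smaller_better(ys):
    """Taguchi smaller-the-better S/N ratio in dB (higher is better)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def grey_relational_coeff(dev, zeta=0.5):
    """Grey relational coefficients for a normalised deviation sequence
    (0 = ideal response, larger = worse); `zeta` is the distinguishing
    coefficient that damps the influence of the worst deviation."""
    dmin, dmax = min(dev), max(dev)
    return [(dmin + zeta * dmax) / (d + zeta * dmax) for d in dev]
```

Averaging the coefficients of all quality characteristics per trial gives the grey relational grade used to fill the response table.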
Tdtd-Edr: Time Orient Delay Tolerant Density Estimation Technique Based Data ...theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
Learning Sparse Networks using Targeted DropoutSeunghyun Hwang
Review: Learning Sparse Networks using Targeted Dropout
- by Seunghyun Hwang (Yonsei University, Severance Hospital, Center for Clinical Data Science)
The efficacy of neural network (NN) and partial least squares (PLS) methods is compared for the prediction of NMR chemical shifts for both 1H and 13C nuclei using very large databases containing millions of chemical shifts. The chemical structure description scheme used in this work is based on individual atoms rather than functional groups. The performance of each method was optimized in a systematic manner described in this work. Both methods, least squares and neural network analysis, produce results of very similar quality, but the least squares algorithm is approximately 2-3 times faster.
Prediction of Extreme Wind Speed Using Artificial Neural Network ApproachScientific Review SR
Accurate prediction of the wind speed of wind farms is necessary because of the intermittent nature of wind in any region. A number of methods, such as persistence, physical, statistical, spatial-correlation, artificial-intelligence, and hybrid methods, are generally available for prediction of wind speed. In this paper, ANN-based methods, viz. Multi Layer Perceptron (MLP) and Radial Basis Function (RBF) neural networks, are used. The performance of the networks applied for prediction of wind speed is evaluated by model performance indicators, viz. Correlation Coefficient (CC), Model Efficiency (MEF), and Mean Absolute Percentage Error (MAPE). Meteorological parameters such as maximum and minimum temperature, air pressure, solar radiation, and altitude are taken as inputs for the MLP and RBF networks to predict the extreme wind speed at Delhi. The study shows that the values of CC, MEF, and MAPE between the observed and predicted wind speed using MLP are 0.992, 95.4%, and 4.3% respectively when training the network; for the RBF network, the values are 0.992, 95.9%, and 3.0% respectively. The model performance analysis indicates that RBF is the better-suited of the two networks studied for prediction of extreme wind speed at Delhi.
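The three model-performance indicators used here are standard and easy to state precisely. A small sketch follows, with an assumed function name and MEF taken as the Nash-Sutcliffe efficiency, which matches the reported percentage form:

```python
def forecast_metrics(obs, pred):
    """Correlation coefficient, model efficiency (Nash-Sutcliffe, as a
    fraction) and mean absolute percentage error (in percent) between
    observed and predicted series of equal length."""
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    cc = cov / (vo * vp) ** 0.5
    mef = 1 - sum((o - p) ** 2 for o, p in zip(obs, pred)) / vo
    mape = 100.0 / n * sum(abs((o - p) / o) for o, p in zip(obs, pred))
    return cc, mef, mape
```

Note that CC only measures linear association: a biased forecast can still score CC = 1.0 while MEF and MAPE expose the error, which is why the paper reports all three.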
Technologies, Strategies And Algorithm In Green Computing – Solution To Energ...IJERA Editor
A safe and unpolluted environment is the basic need of every living being, but today the situation is changing: our environment is getting polluted at a very high rate, and the use of computing devices plays a significant role in this harm. To reduce these harmful impacts, the concept of green computing must be implemented. In this research paper we include some technologies, strategies, and algorithms that are used for the implementation of green computing. The key factor is the awareness of the common user: if common users become aware of the harmful environmental impacts of computing devices and take steps at their own level to reduce electricity use, the concept of green computing will be implemented.
Studies on Strength Evaluation of Fiber Reinforced Plastic CompositesIJERA Editor
Fiber Reinforced Polymer (FRP) composites are extensively used for primary structural components such as the wing, empennage, and fuselage, and for sub-structures such as wing ribs and intermediate spars, in new-generation aircraft, as they give rise to high stiffness- and strength-to-weight ratios. The failure load predictions of such composites are extremely important in order to ascertain flight safety during service. Stress analysis is part of the failure prediction process, since the failure criterion used to predict failure load requires information about stresses and strains in a structure. In the present investigation, the stress analyses of CFRP composite laminates with and without cut-outs have been carried out using both analytical and finite element approaches. In the analytical approach, a MATLAB code was developed for a flat panel using Classical Laminated Plate Theory (CLPT) and different composite failure theories. The MSC.NASTRAN finite element analysis code is used for the finite element analysis. A convergence study was carried out for the flat composite panel to ascertain the best mesh size. Comparison of stress and strain values obtained from the analytical and finite element methods shows that they are in good agreement for the flat panel, which further validates the mesh sizes obtained from the convergence study. Similar mesh sizes are then used for flat panels with circular and elliptical cut-outs, with some mesh refinement around the cut-out regions. The failure load of the flat composite laminate (without cut-out) is determined using four different failure criteria: maximum stress, maximum strain, Tsai-Hill, and Tsai-Wu. The predicted values are compared with experimental results. The most appropriate theory is found to be the Tsai-Wu failure criterion, since its predicted value is very close to the experimental failure loads.
This theory is used further for predicting the failure loads of composite laminates with cut-outs. The average value of the stresses in each lamina is used to determine the failure indices of the lamina in such cases. The results are compared with experimental failure loads available in the literature and are in very good agreement. The Tsai-Wu failure criterion best predicts the failure load of a composite laminate with and without cut-outs.
On Semi-Invariant Submanifolds of a Nearly Hyperbolic Kenmotsu Manifold with ...IJERA Editor
We consider a nearly hyperbolic Kenmotsu manifold admitting a semi-symmetric metric connection and study semi-invariant submanifolds of a nearly hyperbolic Kenmotsu manifold with a semi-symmetric metric connection. We also find the integrability conditions of some distributions on the nearly hyperbolic Kenmotsu manifold and study parallel distributions on it.
Direct Torque Control of Induction Motor Drive Fed from a Photovoltaic Multil...IJERA Editor
This paper presents Direct Torque Control (DTC) using Space Vector Modulation (SVM) for an induction motor drive fed from a photovoltaic multilevel inverter (PV-MLI). The system consists of two main parts: the PV DC power supply (PVDC) and the MLI. The PVDC is used to generate isolated DC sources with ratios suitable for the adopted MLI. Besides the hardware, the control system uses torque and speed estimation to control the load angle and to obtain the appropriate flux-vector trajectory, from which the voltage vector is directly derived based on direct torque control methods. The voltage vector is then generated by a hybrid multilevel inverter employing space vector modulation (SVM). The inverter’s high-quality output voltage leads to high-quality induction motor performance. Moreover, the MLI switching losses are very low because most of the power-cell switches operate at nearly the fundamental frequency. Selected simulation results are presented for system validation.
AN EFFICIENT ALGORITHM FOR WRAPPER AND TAM CO-OPTIMIZATION TO REDUCE TEST APP...IAEME Publication
System-on-chip (SoC) designs composed of many embedded cores are ubiquitous in today’s integrated circuits. Each of these cores needs to be tested separately after the SoC is manufactured. Modular testing is therefore adopted for core-based SoCs, as it promotes test reuse and permits the cores to be tested without comprehensive knowledge of their internal structural details. Such modular testing triggers the need for a special test access mechanism (TAM) to carry test data to and from the core I/Os, and promises to minimize overall test time. In this paper, various issues in co-optimizing the wrapper and TAM are analyzed, comprising the optimal partitioning of the TAM width, the assignment of cores to the TAM partitions, etc.
Reduced Test Pattern Generation of Multiple SIC Vectors with Input and Output...IJERA Editor
In recent years, design for low power has become one of the greatest challenges in high-performance very large scale integration (VLSI) design. Most methods focus on the power consumption during normal-mode operation, while test-mode operation has not normally been a predominant concern. However, it has been found that the power consumed during test-mode operation is often much higher than during normal-mode operation [1]. This is because most of the consumed power results from the switching activity in the nodes of the circuit under test (CUT), which is much higher during test mode than during normal mode [1]–[3]. In the proposed scheme, each generated vector applied to each scan chain is a single-input-change (SIC) vector, which minimizes input transitions and reduces test power. In VLSI testing, power reduction is achieved by increasing the correlation between consecutive test patterns.
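A single-input-change sequence is simply one where consecutive vectors have Hamming distance 1. A minimal generator follows, as an illustration only; the paper's scheme interleaves SIC vectors between random patterns rather than walking the bits in order:

```python
def sic_sequence(base, nbits):
    """Expand one seed vector into a single-input-change sequence:
    each successive vector flips exactly one bit of its predecessor,
    so consecutive patterns have Hamming distance 1."""
    seq = [base]
    v = base
    for b in range(nbits):
        v ^= 1 << b        # flip one bit per step
        seq.append(v)
    return seq

def hamming(a, b):
    """Number of bit positions in which two vectors differ."""
    return bin(a ^ b).count("1")
```

Because only one scan-chain input toggles per pattern, the switching activity propagated into the CUT, and hence the test power, is kept low.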
Techniques for Minimizing Power Consumption in DFT during Scan Test ActivityIJTET Journal
Reducing test power is vital to extending battery lifetime in portable electronic devices that use periodic self-test, to increasing the reliability of testing, and to lowering test cost. A compact test set with highly effective patterns, each detecting multiple faults, is desirable for lower test cost, but such patterns increase switching activity during launch and capture operations. In this paper, we present a novel circuit technique that essentially eliminates test power dissipation in combinational logic by masking signal transitions at the logic inputs during scan shifting. We implement the masking effect by inserting an extra supply-gating transistor in the supply-to-ground path of the first-level gates at the outputs of the scan flip-flops. The gating transistor is turned off in scan mode, essentially gating the supply. Further, DFT penalties are reduced by adopting a selective-trigger scan architecture, which reduces switching activity in the circuit under test (CUT) and increases the clock frequency of the scan procedure. The auxiliary chain shifts the difference between successive test vectors, and only the required transitions (referred to as trigger data) are applied. Power requirements are greatly reduced by the use of a two-stage heuristic technique. Using ISCAS’89 benchmark circuits, the effectiveness in improving SoC test measures (power, time, and data volume) is experimentally evaluated and confirmed.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Clock Gating Cells for Low Power Scan Testing By Dft TechniqueIJERA Editor
This paper presents a DFT scan-testing technique for minimizing power consumption. In integrated circuit technology everything depends on the floorplan, power consumption, timing, and routing. We have improved greatly in all of these areas except power consumption. The clock toggle rate plays a key role in improving device speed, and because of it the power drop is also larger. In sequential circuits, unused clock signals are present when the data path or data signal arrives late. To identify these unused clock signals, we use scan-based testing through DFT. After testing, the unused or unwanted clock signals can be temporarily suppressed by placing clock-gating cells, which decreases power and provides high controllability, avoiding heating and power consumption problems.
A Novel Method for Encoding Data Firmness in VLSI CircuitsEditor IJCATR
The number of tests, the corresponding test data volume, and the test time increase with each new fabrication process technology. Higher circuit densities in system-on-chip (SoC) designs have led to a drastic increase in test data volume. Larger test data sizes demand not only more memory but also more testing power and time. Test data compression can address this problem by reducing the test data volume without affecting overall system performance. The original test data is compressed and stored in memory, so the memory size is significantly reduced. The proposed approach combines selective encoding and dictionary-based encoding, which reduces test data volume and test application time. The experiment is performed on a combinational benchmark circuit designed using the Tanner tool, and the encoding algorithm is implemented using ModelSim.
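The dictionary-based half of such an approach can be illustrated with the usual flag-plus-index coding: a test-data slice found in the dictionary is replaced by a short index, and a miss is kept verbatim. Names and the exact code format are assumptions, not the paper's encoder:

```python
def dict_compress(slices, dictionary):
    """Dictionary coding sketch: a slice found in the dictionary is
    emitted as (1, index); a miss is emitted verbatim as (0, slice)."""
    out = []
    for s in slices:
        if s in dictionary:
            out.append((1, dictionary.index(s)))
        else:
            out.append((0, s))
    return out

def dict_decompress(code, dictionary):
    """Inverse of dict_compress: indices are looked up, misses copied."""
    return [dictionary[v] if flag else v for flag, v in code]
```

Compression is won when hits are frequent: an index of a few bits replaces a full slice, while each miss costs only one extra flag bit.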
The paper presents a technique called Mobility-enabled Multi-Level Optimization (MeMLO) that addresses the existing problem of clustering in wireless sensor networks (WSN). The technique enables selection of the aggregator node based on multiple optimization attributes, which gives better decision capability to the clustering mechanism by choosing the best aggregator node. The outcome of the study shows that MeMLO is highly capable of minimizing the halt time of the mobile node, which significantly lowers the transmit power of the aggregator node. The simulation outcome shows negligible computational complexity, faster response time, and high energy efficiency for large-scale WSNs over long simulation runs as compared to the conventional LEACH algorithm.
Optimal power flow with distributed energy sources using whale optimization a...IJECEIAES
Renewable energy generation is increasingly attractive since it is non-polluting and viable. Recently, the technical and economic performance of power system networks has been enhanced by integrating renewable energy sources (RES). This work focuses on sizing solar and wind production to replace thermal generation and thereby decrease cost and losses in a large electrical power system. The Weibull and lognormal probability density functions are used to calculate the deliverable power of wind and solar energy to be integrated into the power system. Due to the uncertain and intermittent nature of these sources, their integration complicates the optimal power flow problem. This paper proposes an optimal power flow (OPF) formulation using the whale optimization algorithm (WOA) to solve for the power system with integrated stochastic wind and solar power. The ideal capacity of the RES along with the thermal generators is determined by taking total generation cost as the objective function. The proposed methodology is tested on the IEEE-30 system to confirm its usefulness. The obtained results show the effectiveness of WOA when compared with other algorithms such as the non-dominated sorting genetic algorithm (NSGA-II), grey wolf optimization (GWO), and particle swarm optimization-GWO (PSOGWO).
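The role of the Weibull density can be made concrete with a small Monte-Carlo sketch: sample wind speeds by inverse-transform from Weibull(k, c), then map them through an idealised turbine power curve. All parameter names and the linear ramp are assumptions for illustration; the paper's cost model is not reproduced here:

```python
import math
import random

def weibull_wind_power(k, c, v_in, v_r, v_out, p_r, n, seed=7):
    """Monte-Carlo estimate of expected wind-farm output: wind speed is
    drawn from a Weibull(k, c) distribution via inverse-transform
    sampling, then mapped through an idealised turbine power curve."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        v = c * (-math.log(1.0 - u)) ** (1.0 / k)  # inverse Weibull CDF
        if v < v_in or v >= v_out:
            p = 0.0                                # below cut-in / above cut-out
        elif v < v_r:
            p = p_r * (v - v_in) / (v_r - v_in)    # linear ramp region
        else:
            p = p_r                                # rated output
        total += p
    return total / n
```

In an OPF study, this expected power (and its over/under-estimation penalties) would enter the generation-cost objective that WOA minimizes.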
Enhanced Skewed Load and Broadside Power Reduction in Transition Fault TestingIJERA Editor
This paper proposes a T-algorithm technique to optimize skewed-load and broadside transition-fault testing architectures, and uses the architecture to compare test-pattern results for any type of combinational design. The T-algorithm optimizes the test clocking for the required test patterns and replaces the flip-flop-and-mux arrangement, reducing the number of flip-flops in the skewed-load architecture and improving the accuracy of the testing architecture; the optimization process mainly targets gate-level optimization for a secure architecture. The proposed system consists of a secure testing architecture that includes XOR-gate logic, with the modification applied to the broadside and overall skewed-load architecture. The technique checks the scanning results during the testing process; the testing architecture is mainly used to detect error attacks on the scanning process, and the scanning process works with any type of testing architecture. Scanning is secured using the T-algorithm for the skewed-load architecture, and the testing process is developed for fault identification: the diagnosis technique detects errors in the scanning process for any combinational architecture. The T-algorithm also reduces circuit complexity for the testing architecture, which in turn reduces delay. In future work, this technique can be used to reduce the gate count of the sticky comparator architecture and to modify the clocking function for the testing process. Compared to the present methodology, the technique improves the accuracy of the testing process.
Dominant block guided optimal cache size estimation to maximize ipc of embedd...ijesajournal
Embedded system software is highly constrained from the performance, memory footprint, energy consumption, and implementation cost viewpoints. It is always desirable to obtain better Instructions per Cycle (IPC), and the instruction cache is a major contributor to improving IPC. Cache memories are realized on the same chip where the processor runs, which considerably increases system cost as well; hence a trade-off must be maintained between cache size and the performance improvement offered. The number of cache lines and the cache line size are important parameters in cache design, and the design space for caches is quite large. It is time-consuming to execute a given application with different cache sizes on an instruction set simulator (ISS) to figure out the optimal cache size. In this paper, a technique is proposed to identify the number of cache lines and the cache line size for the L1 instruction cache that will offer the best or nearly the best IPC. The cache size is derived, at a higher abstraction level, from basic-block analysis in the Low Level Virtual Machine (LLVM) environment. The cache size estimated from the LLVM environment is cross-validated by simulating a set of benchmark applications with different cache sizes in SimpleScalar’s out-of-order simulator. The proposed method appears superior in terms of estimation accuracy and/or estimation time compared to existing methods for estimating optimal cache size parameters (cache line size, number of cache lines).
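The trade-off described here, larger caches cost more but miss less, can be demonstrated with a few lines of direct-mapped cache simulation over an address trace. This is a deliberately simple stand-in for the ISS runs the paper tries to avoid, with assumed names:

```python
def miss_rate(trace, n_lines, line_size):
    """Direct-mapped cache sketch: an address maps to line
    (addr // line_size) % n_lines; a miss installs the block's tag."""
    lines = [None] * n_lines
    misses = 0
    for addr in trace:
        block = addr // line_size
        idx = block % n_lines
        if lines[idx] != block:
            misses += 1
            lines[idx] = block
    return misses / len(trace)
```

Looping twice over 64 bytes of word-aligned instruction fetches, a 4-line cache of 16-byte lines holds the whole loop (misses only on the cold pass), while a 2-line cache thrashes, which is exactly the kind of knee in the size/IPC curve the estimation technique looks for.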
Similar to Test Scheduling of Core Based SOC Using Greedy Algorithm
Event Management System Vb Net Project Report.pdfKamal Acharya
In the present era, the scope of information technology is growing very fast; we do not see any area untouched by this industry. The scope of information technology has become wider and includes business and industry, household business, communication, education, entertainment, science, medicine, engineering, distance learning, weather forecasting, career searching, and so on.
My project, named “Event Management System”, is software that stores and maintains all events coordinated in the college. It is also helpful for printing related reports. The project helps record the events coordinated by faculty members with their name, event subject, date, and details in an efficient and effective way.
The system lets a user record all the events coordinated by a particular faculty member. Compared with the existing system, the proposed system adds some more features, such as security.
Vaccine management system project report documentation..pdfKamal Acharya
The Division of Vaccine and Immunization is facing increasing difficulty monitoring vaccines and other commodities distribution once they have been distributed from the national stores. With the introduction of new vaccines, more challenges have been anticipated with this additions posing serious threat to the already over strained vaccine supply chain system in Kenya.
Automobile Management System Project Report.pdfKamal Acharya
The proposed project is developed to manage the automobile in the automobile dealer company. The main module in this project is login, automobile management, customer management, sales, complaints and reports. The first module is the login. The automobile showroom owner should login to the project for usage. The username and password are verified and if it is correct, next form opens. If the username and password are not correct, it shows the error message.
When a customer search for a automobile, if the automobile is available, they will be taken to a page that shows the details of the automobile including automobile name, automobile ID, quantity, price etc. “Automobile Management System” is useful for maintaining automobiles, customers effectively and hence helps for establishing good relation between customer and automobile organization. It contains various customized modules for effectively maintaining automobiles and stock information accurately and safely.
When the automobile is sold to the customer, stock will be reduced automatically. When a new purchase is made, stock will be increased automatically. While selecting automobiles for sale, the proposed software will automatically check for total number of available stock of that particular item, if the total stock of that particular item is less than 5, software will notify the user to purchase the particular item.
Also when the user tries to sale items which are not in stock, the system will prompt the user that the stock is not enough. Customers of this system can search for a automobile; can purchase a automobile easily by selecting fast. On the other hand the stock of automobiles can be maintained perfectly by the automobile shop manager overcoming the drawbacks of existing system.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Courier management system project report.pdfKamal Acharya
It is now-a-days very important for the people to send or receive articles like imported furniture, electronic items, gifts, business goods and the like. People depend vastly on different transport systems which mostly use the manual way of receiving and delivering the articles. There is no way to track the articles till they are received and there is no way to let the customer know what happened in transit, once he booked some articles. In such a situation, we need a system which completely computerizes the cargo activities including time to time tracking of the articles sent. This need is fulfilled by Courier Management System software which is online software for the cargo management people that enables them to receive the goods from a source and send them to a required destination and track their status from time to time.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfKamal Acharya
The College Bus Management system is completely developed by Visual Basic .NET Version. The application is connect with most secured database language MS SQL Server. The application is develop by using best combination of front-end and back-end languages. The application is totally design like flat user interface. This flat user interface is more attractive user interface in 2017. The application is gives more important to the system functionality. The application is to manage the student’s details, driver’s details, bus details, bus route details, bus fees details and more. The application has only one unit for admin. The admin can manage the entire application. The admin can login into the application by using username and password of the admin. The application is develop for big and small colleges. It is more user friendly for non-computer person. Even they can easily learn how to manage the application within hours. The application is more secure by the admin. The system will give an effective output for the VB.Net and SQL Server given as input to the system. The compiled java program given as input to the system, after scanning the program will generate different reports. The application generates the report for users. The admin can view and download the report of the data. The application deliver the excel format reports. Because, excel formatted reports is very easy to understand the income and expense of the college bus. This application is mainly develop for windows operating system users. In 2017, 73% of people enterprises are using windows operating system. So the application will easily install for all the windows operating system users. The application-developed size is very low. The application consumes very low space in disk. Therefore, the user can allocate very minimum local disk space for this application.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Democratizing Fuzzing at Scale by Abhishek Aryaabh.arya
Presented at NUS: Fuzzing and Software Security Summer School 2024
This keynote talks about the democratization of fuzzing at scale, highlighting the collaboration between open source communities, academia, and industry to advance the field of fuzzing. It delves into the history of fuzzing, the development of scalable fuzzing platforms, and the empowerment of community-driven research. The talk will further discuss recent advancements leveraging AI/ML and offer insights into the future evolution of the fuzzing landscape.
Test Scheduling of Core Based SOC Using Greedy Algorithm
Naveen Dewan, International Journal of Engineering Research and Applications (IJERA), www.ijera.com, ISSN: 2248-9622, Vol. 4, Issue 9 (Version 3), September 2014, pp. 80-85
Naveen Dewan, Harpreet Vohra
Department of Electronics and Communication Engineering, Thapar University, Patiala.
Abstract— The escalating level of integration has led design engineers to embed pre-designed and pre-verified logic blocks on chip to build a complete system-on-chip (SoC). This technology trend has created new challenges for design and test engineers. To ensure the testability of the entire system, test planning needs to be done during the design phase. To save test cost, the test application time must be reduced, which requires tests to be run concurrently; however, running the tests of multiple cores in parallel increases power dissipation, so test optimization must account for both time and power. This paper presents a greedy-algorithm-based approach for scheduling the cores under test time, power, test access mechanism (TAM) and bandwidth constraints. TAM widths are allotted to the cores dynamically to save test time and utilize the full bandwidth. Scheduling is performed on ITC'02 benchmark circuits, and experiments on these circuits show that the algorithm offers lower test application time than the multiple-constraint-driven approach.
Keywords— SoC testing, test scheduling, test bandwidth, power constraint
I. INTRODUCTION
Advances in design methodologies and semiconductor process technologies have led to systems with extensive functionality implemented on a single die, called systems-on-chip. A set of pre-designed and pre-verified design modules, in the form of hard, soft or firm cores, is integrated into a system using user-defined logic (UDL) and interconnects, making it possible to implement complex systems with digital, analog and mixed-signal components. The urgent time-to-market requirement poses many challenges for design and test engineers, and the associated test cost has become the major bottleneck in reducing the overall cost of a system [23]. The ITRS semiconductor roadmap [17] projects that future generations of SoC designs will need hundreds of processors, which will further increase test cost. Testing an SoC is costly because of the large test data volume introduced by increased integration and interconnection intricacy, high power dissipation during test, expensive test generation procedures, the heterogeneous mix of cores, and long test application times. Many techniques have been proposed to reduce this cost through test scheduling, test data volume reduction and test design optimization. Test generation can be done either off-chip, by running ATPG (automatic test pattern generation) algorithms on expensive automatic test equipment, or on-chip, using built-in hardware called BIST (Built-In Self Test) [15]. BIST is beneficial when on-chip TAM availability is limited; however, BIST-ready cores are not always available, and multi-site testing of SoCs for test time reduction makes the ATE more attractive. For test access and application, Zorian et al. [24][25] proposed a modular approach comprising wrapper design [4][27][28][20], TAM design [21][22][29][30] and test scheduling [2][18][19][26]. TAM optimization and test scheduling have been an integral part of test research and optimization for the past three decades, and test scheduling has been proved to be an NP-hard problem. This paper proposes a greedy-algorithm-based approach to test scheduling that reduces test time subject to test power and bandwidth constraints; the problem can be reduced to a rectangle packing problem [3]. Experimental results on ITC'02 benchmark circuits show that near-optimal results are achieved, and a comparison with Pouget et al. [4] shows the approach to be better. The paper also reviews the background of SoC test scheduling.
II. LITERATURE REVIEW
Concurrently testing a core-based system accelerates testing, and an efficient schedule can reduce the overall test time; several test-scheduling approaches based on various algorithms have been proposed. Pouget et al. [4] proposed a test scheduling technique that minimizes the test application time while considering multiple resource conflicts: testing of the interconnections between cores, module testing with multiple test sets, sharing of the TAM, and test power conflicts. A wrapper design algorithm and a test scheduling heuristic are used to calculate the test time. All Pareto-optimal points are computed for each core, the optimal time of each core is derived from these points, and scheduling is then performed considering all conflicts and optimal points. Goel et al. [5] proposed two approaches for efficient testing of SoCs with hierarchical cores. The first solves the problem through wrapper design, leaving full flexibility for TAM optimization and test scheduling; the second is based on a modified wrapper design for parent cores that operate in two disjoint modes for testing parent and child cores. The first approach gives lower test application times, while the second offers lower area cost. ΔT, the change in total test application time of the modified wrapper cells with respect to flat-core scheduling, is only 0 to 2 percent, so hierarchical cores can be tested with near-minimum test application time using modified wrapper cells. Power must also be considered during test scheduling. Larsson et al. [6] observe that concurrent test application leads to higher switching activity, and hence higher power consumption than during normal operation, since the goal is to maximize the number of tested faults in minimal time; because a system under test can be damaged, the power constraint must be respected. They propose a three-level power model covering the system, the power grid and the core. Its advantage is that the system-level power budget is met and hotspots can be avoided at specific cores and hotspot areas of the chip. Their experiments show that new design and test alternatives can reduce total test cost, producing results close to those of a pseudo-exhaustive technique at computational costs close to those of an estimation-based technique.
Different algorithms and techniques are used in [7-10][12][13]. Harmanani et al. [7] presented a power-constrained approach to the test scheduling problem of core-based systems based on a genetic algorithm; the method minimizes test application time through compact test schedules. The genetic formulation includes a chromosomal representation, selection and reproduction, and genetic operators (mutation, crossover and fill-gap). During every generation, chromosomes are selected for reproduction, producing new test schedules, and the mutation operator uses a constructive approach that minimizes the generation of infeasible schedules. Ahn et al. [8] proposed a SoC test scheduling method based on an ant colony optimization (ACO) algorithm: it formulates test scheduling as a rectangle bin-packing problem and uses ACO to cover more of the solution space, increasing the probability of finding optimal solutions; the test wrappers for the embedded cores must be designed and the Pareto-optimal points found before scheduling begins. In [9] a genetic-algorithm-based approach is used for TAM optimization, with different data rates on the ATE channels to reduce test time. An ant colony optimization approach, a technique for finding good paths through graphs, is considered in [10]. In [11] and [14] a temperature constraint is considered for test scheduling.
III. PROPOSED ALGORITHM
An SoC has three types of cores: combinational cores, sequential cores and embedded memory cores. A core that is built-in self-tested is assumed to take one unit of test time. The number of inputs of an individual core is called the bandwidth of the core; the total bandwidth is the limited set of test access mechanism buses available, and the total power is the power budget available for testing the SoC. A greedy schedule alone may not give the minimum test time, so the schedule is heuristically improved for time minimization. A pictorial representation of a schedule is given in Fig. 1: the height of each rectangle represents the bandwidth of a particular core and its width represents the test time of that core. The maximum power and maximum bandwidth in Fig. 1 are 12 and 10 respectively, so at any instant the scheduled power and bandwidth must not exceed these values. The cores should be packed tightly to obtain the minimum test time. Only cores whose bandwidth and power do not exceed the total bandwidth and total power can be tested with this algorithm. The algorithm greedily arranges the cores by bandwidth and schedules them within the given total bandwidth and total power; the total test time (TTT) is calculated and stored. Another schedule is then formed by re-arranging the cores, e.g. by core test time or core power, the new TTT is compared, and the lowest TTT is taken as the best schedule.
Figure 1: Representation of a schedule consisting of eight cores.
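As an illustrative sketch (not part of the paper), a schedule like the one in Fig. 1 can be represented as a set of rectangles and checked against the bandwidth and power caps; all names and values below are hypothetical:

```python
# Hypothetical helper: verify that a schedule such as Fig. 1 never exceeds
# the bandwidth and power caps at any instant.

def schedule_is_feasible(schedule, max_bw, max_power):
    """schedule: list of (start, end, bandwidth, power), one entry per core."""
    # Resource usage can only change at interval boundaries, so it is
    # enough to check each start/end instant.
    instants = sorted({t for s, e, _, _ in schedule for t in (s, e)})
    for t in instants:
        active = [(bw, p) for s, e, bw, p in schedule if s <= t < e]
        if sum(bw for bw, _ in active) > max_bw:
            return False
        if sum(p for _, p in active) > max_power:
            return False
    return True

# Two cores tested concurrently: combined BW 6+4=10, combined power 5+7=12.
cores = [(0, 5, 6, 5), (0, 3, 4, 7)]
print(schedule_is_feasible(cores, max_bw=10, max_power=12))  # True
print(schedule_is_feasible(cores, max_bw=9, max_power=12))   # False
```

Such a check mirrors the constraint in Fig. 1 that the summed bandwidth and power of concurrently running cores stay within the maxima.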
3. Naveen Dewan Int. Journal of Engineering Research and Applications www.ijera.com
ISSN : 2248-9622, Vol. 4, Issue 9( Version 3), September 2014, pp.80-85
www.ijera.com 82 | P a g e
Algorithm
INPUT:
1. N: total number of cores.
2. maxBW: total bandwidth available.
3. maxP: total power available.
4. Core set: for each core, (i) the core number, (ii) the bandwidth of the core, (iii) the maximum power consumption of the core, (iv) the test time of the core, (v) an integer u, used to check whether the core has been scheduled yet and whether it has the minimum end time among the scheduled cores, (vi) the start time of the core.
OUTPUT:
1. Core set: for each core, (i) the test end time of the core, (ii) the integer u value showing whether the core has been tested.
2. Total test time of the schedule.

BEGIN
  Get the inputs N, Core set, maxBW, maxP.
  Arrange the Core set by decreasing BW and decreasing P, by decreasing power and decreasing test time, or in any other possible order.
  Set start time ST = 0; remBW = maxBW; remP = maxP; u = 0 for every core; temp = a large constant; t = 0.
  repeat until every core is scheduled:
    for i = 0 to N-1 do
      select a core with u = 0 whose bandwidth and power fit within remBW and remP;
      update remBW and remP; set u = 1; start time of the core = ST;
      end time of the core = ST + test time of the core.
    for i = 0 to N-1 do
      if u = 1 and the end time of the core < temp then
        temp = end time of the core; t = number of that core.
    for i = 0 to N-1 do
      if u = 1 and the end time of the core = temp then
        release the bandwidth and power of the core (update remBW and remP);
        set u = 2; ST = temp.
END
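The loop structure above can be sketched in Python as follows; this is a simplified interpretation, not the authors' implementation. It assumes every core individually fits within maxBW and maxP (as the paper requires), and the names are ours:

```python
# A minimal sketch of the greedy scheduling loop: sort cores by decreasing
# bandwidth, start every core that fits the remaining budget, then advance
# time to the earliest finishing core and reclaim its resources.

def greedy_schedule(cores, max_bw, max_p):
    """cores: list of (name, bandwidth, power, test_time) tuples.
    Returns (total_test_time, {name: start_time})."""
    pending = sorted(cores, key=lambda c: c[1], reverse=True)  # decreasing BW
    running = []   # (end_time, bandwidth, power) of cores currently under test
    starts = {}
    t, rem_bw, rem_p = 0, max_bw, max_p
    while pending or running:
        # Greedily start every pending core that fits the remaining budget.
        for core in list(pending):
            name, bw, p, tt = core
            if bw <= rem_bw and p <= rem_p:
                pending.remove(core)
                running.append((t + tt, bw, p))
                starts[name] = t
                rem_bw -= bw
                rem_p -= p
        # Advance to the earliest finishing core and reclaim its resources.
        running.sort()
        end, bw, p = running.pop(0)
        t, rem_bw, rem_p = end, rem_bw + bw, rem_p + p
    return t, starts

# Toy example with three cores (name, BW, power, test time):
cores = [("c1", 6, 5, 5), ("c2", 4, 7, 3), ("c3", 8, 6, 4)]
total, starts = greedy_schedule(cores, max_bw=10, max_p=12)
print(total, starts)  # total test time 9 for this toy example
```

As in the paper, the total test time is simply the end time of the last core to finish, and re-running the function with a different initial ordering yields the alternative schedules whose TTTs are compared.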
In this algorithm, the end time of the last core selected for scheduling becomes the total test time of the whole schedule.

IV. EXPERIMENTAL RESULTS
The proposed algorithm is applied to p93791 and p22810, two ITC'02 benchmark SoCs. The test application time is calculated while dynamically varying the TAM sizes assigned to different cores, keeping the total test bus width constant. Tables 2 and 3 give, for each core, the module number, core bandwidth (TAM width), power consumption and test time. The test time of each core depends on the scan chain width: the scan chain length is calculated by adding the number of functional inputs to the total scan chain length and dividing by the required TAM width (64, 32 or 16). The test time is then calculated using equation (1) [16]:

Test Time = (1 + max(Si, Sout)) * TP + min(Si, Sout)   (1)

where Si is the input scan chain length, Sout is the output scan chain length and TP is the number of test patterns. After calculating the test times of the cores, these values are applied to the algorithm and the results are compared with [4]. Tables 4 and 5 give the results for p93791 and p22810, showing the total test time at TAM widths 64, 80 and 128 with power limits Pmax = 30000, 25000 and 10000 for p93791 and Pmax = 10000, 6000 and 3000 for p22810. Tables 6 and 7 give the results for the same values from [4]; comparing them shows that the heuristic achieves better results.
Table 2: Test time calculation of p93791
Module | Core Bandwidth (BW) | Power | Core test time
1      | 32 | 7014 | 91019
2      | 16 | 16   | 768
3      | 16 | 69   | 3893
4      | 12 | 225  | 143
5      | 32 | 248  | 42895
6      | 64 | 6150 | 83219
7      | 9  | 41   | 708
8      | 9  | 41   | 708
9      | 16 | 77   | 768
10     | 32 | 395  | 13968
11     | 16 | 862  | 8835
12     | 32 | 4634 | 56447
13     | 64 | 9741 | 29639
14     | 64 | 9741 | 29639
15     | 16 | 78   | 1152
16     | 32 | 201  | 2376
17     | 32 | 6674 | 44701
18     | 16 | 113  | 294
19     | 64 | 5252 | 16246
20     | 64 | 7670 | 50039
21     | 16 | 113  | 294
22     | 16 | 76   | 168
23     | 64 | 7844 | 29374
24     | 17 | 21   | 3072
25     | 29 | 45   | 2688
26     | 16 | 76   | 384
27     | 64 | 3135 | 44932
28     | 32 | 159  | 1584
29     | 64 | 6756 | 18164
30     | 16 | 77   | 768
31     | 32 | 218  | 1224
32     | 32 | 396  | 37008
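Equation (1) above can be sketched directly; the scan-chain lengths and pattern count below are illustrative values, not taken from the benchmark cores:

```python
# Sketch of equation (1): scan test time of a wrapped core.

def core_test_time(s_in, s_out, tp):
    """s_in/s_out: input/output scan chain lengths; tp: number of patterns."""
    return (1 + max(s_in, s_out)) * tp + min(s_in, s_out)

print(core_test_time(s_in=50, s_out=40, tp=100))  # (1+50)*100 + 40 = 5140
```

The intuition is that each of the TP patterns must be shifted through the longer of the two scan chains (plus one capture cycle), with a final shift-out of min(Si, Sout) cycles overlapping nothing.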
Table 3: Test time calculation of p22810
Module | Core Bandwidth (BW) | Power | Core test time
1      | 16 | 173  | 80
2      | 16 | 173  | 445
3      | 28 | 1238 | 33011
4      | 16 | 80   | 61620
5      | 16 | 64   | 12432
6      | 32 | 112  | 666
7      | 32 | 2489 | 15224
8      | 32 | 144  | 2848
9      | 32 | 148  | 10528
10     | 16 | 52   | 7824
11     | 64 | 2505 | 6687
12     | 32 | 289  | 389
13     | 16 | 739  | 3989
14     | 32 | 848  | 2856
15     | 32 | 487  | 23
16     | 16 | 115  | 631
17     | 32 | 580  | 645
18     | 16 | 237  | 80
19     | 32 | 442  | 311
20     | 32 | 441  | 8384
21     | 32 | 167  | 412
22     | 32 | 318  | 1385
23     | 64 | 1309 | 9319
24     | 32 | 260  | 539
25     | 31 | 363  | 491
26     | 32 | 311  | 279
27     | 32 | 2512 | 15551
28     | 64 | 2921 | 33123
29     | 32 | 413  | 32
30     | 32 | 508  | 431
Table 4: Scheduling on p93791 using proposed algorithm
TAM Width | Test Time (Pmax=30000) | Test Time (Pmax=25000) | Test Time (Pmax=10000)
128       | 228718                 | 228718                 | 432241
80        | 449134                 | 449134                 | 493419
64        | 454711                 | 454711                 | 493419
Table 6: Scheduling on p93791 using [4]
TAM Width | Test Time (Pmax=30000) | Test Time (Pmax=25000) | Test Time (Pmax=10000)
128       | 457862                 | 493599                 | 568734
80        | 787588                 | 821475                 | 1091210
64        | 945425                 | 965383                 | 1117385
Table 5: Scheduling on p22810 using proposed algorithm
TAM Width | Test Time (Pmax=10000) | Test Time (Pmax=6000) | Test Time (Pmax=3000)
128       | 61620                  | 68307                 | 96909
80        | 98133                  | 98133                 | 115194
64        | 127018                 | 127018                | 127018
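As a quick cross-check (our arithmetic, not the paper's), the percentage reduction in test time relative to [4] for p93791 can be computed directly from Tables 4 and 6:

```python
# p93791 results: rows keyed by TAM width, columns Pmax = 30000, 25000, 10000.
proposed = {128: [228718, 228718, 432241],
            80:  [449134, 449134, 493419],
            64:  [454711, 454711, 493419]}
reference = {128: [457862, 493599, 568734],     # values from [4] (Table 6)
             80:  [787588, 821475, 1091210],
             64:  [945425, 965383, 1117385]}

# Percentage test-time reduction per (TAM width, Pmax) cell.
reduction = {tam: [round(100 * (r - p) / r, 1)
                   for p, r in zip(proposed[tam], reference[tam])]
             for tam in proposed}

for tam in (128, 80, 64):
    print(tam, reduction[tam])
```

Every cell of Table 4 is lower than the corresponding cell of Table 6; for example, at TAM width 128 and Pmax = 30000 the reduction is about 50%.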
[19] X. Chuan-pei, H. Hong-bo, N. Jun-hao, "Test Scheduling of SOC with Power Constraint Based on Particle Swarm Optimization Algorithm," Third International Conference on Genetic and Evolutionary Computing, 2009.
[20] J. Pouget, E. Larsson, Z. Peng, M. Flottes, B. Rouzeyre, "An Efficient Approach to SoC Wrapper Design, TAM Configuration and Test Scheduling," Eighth IEEE European Test Workshop (ETW'03).
[21] H. M. Harmanani and R. Farah, "Integrating Wrapper Design, TAM Assignment, and Test Scheduling for SOC Test Optimization," IEEE, 2008.
[22] S. Koranne, "Design of Reconfigurable Access Wrappers for Embedded Core Based SoC Test," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 11, no. 5, October 2003.
[23] International Technology Roadmap for Semiconductors (ITRS), 2003, http://www.itrs.net/Links/2003ITRS/Home2003.htm
[24] Y. Zorian, "A distributed BIST control scheme for complex VLSI devices," VTS, pp. 6-11, 1993.
[25] Y. Zorian, E. J. Marinissen and S. Dey, "Testing Embedded-Core-Based System Chips," IEEE Computer, 32(6), pp. 52-60, June 1999.
[26] C. Su and C. Wu, "A Graph-Based Approach to Power-Constrained SOC Test Scheduling," Journal of Electronic Testing: Theory and Applications 20, pp. 45-60, 2004.
[27] E. J. Marinissen, S. K. Goel, and M. Lousberg, "Wrapper Design for Embedded Core Test," Proceedings of International Test Conference (ITC), Atlantic City, NJ, USA, pp. 911-920, October 2000.
[28] K. Kim and K. K. Saluja, "Low-Area Wrapper Cell Design for Hierarchical SoC Testing," Journal of Electronic Testing, pp. 347-352, 2009.
[29] X. Wu, Y. Chen, K. Chakrabarty, Y. Xie, "Test-access mechanism optimization for core-based three-dimensional SOCs," Microelectronics Journal, pp. 601-615, 2010.
[30] S. K. Goel, E. J. Marinissen, "SOC Test Architecture Design for Efficient Utilization of Test Bandwidth," ACM Transactions on Design Automation of Electronic Systems, vol. 8, no. 4, pp. 399-429, October 2003.
AUTHORS
First Author – Naveen Dewan, pursuing MTech VLSI, Department of Electronics and Communication Engineering, Thapar University, Patiala. nandydan@gmail.com
Second Author – Harpreet Vohra, Assistant Professor, Department of Electronics and Communication Engineering, Thapar University, Patiala. hvohra@thapar.edu
Correspondence Author– Naveen Dewan, nandydan@gmail.com , Contact number (+918054121464)