Computational overhead is a major concern when striving for increased accuracy in online power system security assessment (OPSSA). This paper proposes a scalable solution technique based on a distributed computing architecture to mitigate the problem. A variant of the master/slave pattern is used to deploy a cluster of workstations (COW), which acts as the computational engine for the OPSSA. Owing to the inherent parallel structure of the security analysis algorithm, domain decomposition, rather than functional decomposition, is adopted to exploit the potential of distributed computing. The security assessment is performed using the developed composite security index, which is defined as a function of bus voltage and line flow limit violations and can accurately differentiate secure from non-secure cases. The validity of the proposed architecture is demonstrated by results obtained from intensive experimentation on the benchmark IEEE 57-bus test system. The proposed framework is scalable and can be further extended to intelligent monitoring and control of power systems.
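The domain decomposition described above — splitting the list of contingency cases across worker processes rather than splitting the algorithm into stages — can be sketched with a master/slave process pool. The `run_contingency` body below is a hypothetical placeholder, not the paper's actual load-flow engine:

```python
from multiprocessing import Pool

def run_contingency(case_id):
    # Hypothetical stand-in for one post-contingency load-flow run; a
    # real engine would solve the case and count limit violations.
    return case_id, (case_id * 37) % 5

def assess_security(contingencies, workers=4):
    # Domain decomposition: the master splits the contingency list and
    # each slave process evaluates its own share independently.
    with Pool(processes=workers) as pool:
        return dict(pool.map(run_contingency, list(contingencies)))
```

Because each contingency case is independent, this pattern scales with the number of workers in the cluster, which is the property the abstract relies on.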
Cluster Computing Environment for On-line Static Security Assessment of lar... (IDES Editor)
The increased size of modern power systems demands faster and more accurate means of security assessment, so that decisions for reliable and secure operation planning can be drawn in a systematic manner. Large computational overhead is the major impediment preventing power system security assessment (PSSA) from on-line use. To mitigate this problem, this paper proposes a cluster computing based architecture for power system static security assessment, utilizing tools from the open source domain. A variant of the master/slave pattern is used to deploy the cluster of workstations (COW), which acts as the computational engine for the on-line PSSA. The security assessment is performed using the developed composite security index, which is defined as a function of bus voltage and line flow limit violations and can accurately differentiate secure from non-secure cases. Owing to the inherent parallel structure of the security assessment algorithm, and to exploit the potential of distributed computing, domain decomposition is employed for parallelizing the sequential algorithm. Extensive experimentation was carried out on the IEEE 57-bus and IEEE 145-bus 50-machine standard test systems to demonstrate the validity of the proposed architecture.
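The abstract defines the composite security index only as a function of bus voltage and line-flow limit violations; its exact form is not given. A minimal hypothetical sketch, summing normalized violations so that a value of zero indicates a secure case:

```python
def composite_security_index(v_bus, v_min, v_max, flows, flow_limits):
    # Hypothetical composite index: the sum of normalized bus-voltage and
    # line-flow limit violations. Zero means no limit is violated, i.e.
    # the operating point is classified as secure.
    idx = 0.0
    for v in v_bus:
        if v < v_min:
            idx += (v_min - v) / v_min      # undervoltage violation
        elif v > v_max:
            idx += (v - v_max) / v_max      # overvoltage violation
    for f, lim in zip(flows, flow_limits):
        if abs(f) > lim:
            idx += (abs(f) - lim) / lim     # line overload violation
    return idx
```

Any monotone function of the same violation terms would serve the same screening purpose; the paper's own index may differ in weighting and shape.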
Risk assessment paper ID 0023 (IAES IJEECS)
This paper presents a risk assessment method for evaluating the cyber security of power systems in view of the role of protection systems. It examines the impact of transmission line and bus protection systems located in substations on the cyber-physical performance of power systems. The proposed method simulates the physical response of power systems to malicious attacks on protection system settings and parameters. The relationship between protection device settings, protection logic, and circuit breaker logic is analyzed. The expected load curtailment (ELC) indicator is used to quantify potential losses in the system due to cyber attacks. Monte Carlo simulation is used to calculate the ELC as the assumed capabilities of the attackers and the bus arrangements are varied. The effectiveness of the proposed risk assessment method is illustrated on the 9-bus system and the IEEE 68-bus system.
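The Monte Carlo estimation of ELC can be sketched as follows; the per-attack success probabilities and curtailment figures are illustrative assumptions, not values from the paper:

```python
import random

def expected_load_curtailment(success_prob, curtailment_mw,
                              n_trials=10_000, seed=1):
    # Monte Carlo estimate of expected load curtailment (ELC): in each
    # trial, the attack on each protection setting succeeds with its
    # given probability and sheds the associated load (MW).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        for p, mw in zip(success_prob, curtailment_mw):
            if rng.random() < p:
                total += mw
    return total / n_trials
```

Re-running the estimator with different probability vectors is the mechanism by which attacker capability and bus arrangement are varied in the study.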
An Intrusion Detection Algorithm for AMI (IJCI Journal)
Nowadays, smart metering devices allow energy utilities to manage a wide variety of subscribers; reading the metering devices and handling billing, disconnection, and reconnection of subscribers are important issues. The performance of these intelligent systems depends on information transfer over information technology infrastructure, so data reported from the network must be managed to avoid malicious activities, including those that could affect the system's quality of service. In this paper, to control the reported data and ensure the veracity of the obtained information, an intrusion detection system based on the support vector machine (SVM) and principal component analysis (PCA) is proposed to recognize and identify intrusions and attacks in the smart grid. The operation of the intrusion detection system is studied for different SVM kernels when SVM and PCA are used simultaneously. To evaluate the algorithm, numerical simulations based on the KDD99 dataset are carried out for five different kernels. A comparative analysis of the presented intrusion detection algorithm is also given in terms of response time, detection rate, and error rate, with and without PCA. The results indicate that the correct detection rate and the attack error detection rate attain their best values when PCA is used, and that a radial (RBF) kernel reduces the data analysis time and enhances the performance of intrusion detection.
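A minimal sketch of the PCA-plus-SVM pipeline the abstract describes, using scikit-learn and a synthetic stand-in for the KDD99 data (the component count and kernel choice here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for KDD99-style records: 20 features, 2 classes.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA compresses the feature space before the radial (RBF) kernel SVM,
# the configuration the abstract reports as performing best.
model = make_pipeline(PCA(n_components=8), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)  # detection accuracy on held-out records
```

Swapping `kernel="rbf"` for `"linear"`, `"poly"`, or `"sigmoid"` reproduces the kind of kernel comparison the paper performs.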
An Investigation of Fault Tolerance Techniques in Cloud Computing (ijtsrd)
Cloud computing, which is built on the Internet, offers a powerful computation architecture that provides users with information technology capabilities as a service and allows them to access these services without specialized knowledge of, or control over, the underlying infrastructure. The main advantages of fault tolerance, which comprises the techniques needed to maintain availability and reliability in cloud computing, include failure recovery, lower costs, and improved performance. In this paper, we investigate the different techniques used for fault tolerance in cloud computing. Ya Min | Khin Myat Nwe Win | Aye Mya Sandar, "An Investigation of Fault Tolerance Techniques in Cloud Computing", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26611.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/26611/an-investigation-of-fault-tolerance-techniques-in-cloud-computing/ya-min
Self-checking method for fault tolerance solution in wireless sensor network (IJECE IAES)
Recently, wireless sensor networks (WSNs) have been considered for different applications, particularly emergency systems. It is therefore important to keep these networks highly reliable by applying software engineering techniques from the field of fault tolerance. This paper proposes a fault node detection method for WSNs using a self-checking technique based on software engineering rules. The detected faulty node is then covered using the readings of the nearest neighbor nodes (sensors). In addition, the proposed method sends a maintenance message so the fault can be repaired. The method reduces the time between detection and recovery of a fault, preventing the confusion of adopting wrong readings when the detection itself is mistaken. Moreover, it safeguards the reliability of the WSN in terms of operation and data transmission. The proposed method has been tested over different scenarios, and the obtained results show superior efficiency in terms of recovery, reliability, and continuous data transmission.
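The neighbor-based fault coverage step can be sketched as below; the median-deviation rule and tolerance are assumptions, since the abstract does not specify the self-checking criterion:

```python
import statistics

def detect_faulty_nodes(readings, neighbors, tol=5.0):
    # Self-checking sketch: a node whose reading deviates from the median
    # of its neighbours' readings by more than `tol` is flagged as faulty;
    # its value is replaced by that median (fault coverage). In a real
    # deployment a maintenance message would also be dispatched.
    faulty, recovered = [], dict(readings)
    for node, value in readings.items():
        med = statistics.median(readings[n] for n in neighbors[node])
        if abs(value - med) > tol:
            faulty.append(node)
            recovered[node] = med
    return faulty, recovered
```

Using the neighbour median rather than the mean makes the check robust when a single neighbour is itself faulty.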
Controller selection in software defined networks using best-worst multi-crit... (journal BEEI)
Controllers are the key component of the Software-Defined Network (SDN) architecture. Given the diversity of open SDN controllers, network administrators face the question: how can we choose the appropriate SDN controller? The differing characteristics of the controllers have greatly increased the complexity of making the right decision. Multi-criteria decision-making methods (MCDMs) are a family of robust mathematical tools for addressing complex problems with multiple objectives. In this paper, we study the most important features of SDN controllers. To this end, we compare the well-known SDN controllers NOX, POX, Beacon, Floodlight, Ryu, ODL, and ONOS. Leveraging a novel MCDM technique called the Best-Worst Method (BWM), we find the most appropriate SDN controller. We solve an optimization problem and evaluate performance in terms of significant criteria such as throughput and latency. The initial evaluation revealed that ONOS and ODL have the highest throughput, while the lowest throughputs belong to NOX, POX, and Ryu. However, the final evaluation over all criteria confirmed the robustness of ONOS and ODL compared to the other controllers.
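The optimization problem behind BWM can be sketched in its linear form with SciPy; the comparison vectors below are illustrative, and this is a generic BWM solver rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(best_to_others, others_to_worst, best, worst):
    # Linear Best-Worst Method: minimise xi subject to
    # |w_best - a_Bj * w_j| <= xi and |w_j - a_jW * w_worst| <= xi,
    # with the weights summing to one. Variables: [w_1..w_n, xi].
    n = len(best_to_others)
    A_ub, b_ub = [], []
    for j in range(n):
        row = np.zeros(n + 1)
        row[best] += 1.0
        row[j] -= best_to_others[j]
        row[n] = -1.0
        A_ub += [row, np.append(-row[:n], -1.0)]   # both sides of |.|
        row2 = np.zeros(n + 1)
        row2[j] += 1.0
        row2[worst] -= others_to_worst[j]
        row2[n] = -1.0
        A_ub += [row2, np.append(-row2[:n], -1.0)]
        b_ub += [0.0] * 4
    c = np.append(np.zeros(n), 1.0)           # objective: minimise xi
    A_eq = [np.append(np.ones(n), 0.0)]       # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n]
```

For fully consistent pairwise comparisons the optimal deviation is zero and the weights are recovered exactly; the residual deviation otherwise serves as a consistency indicator.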
JPJ1439 On False Data-Injection Attacks against Power System State Estimation... (chennaijp)
Complex Measurement Systems in Medicine: from Synchronized Monotask Measuring... (ITII Industries)
Design problems of flexible computer systems for physiological research are discussed. The widespread case of employing commercial medical devices as parts of the resulting computer system is analyzed. To overcome most of the arising difficulties, we propose using a universal synchronizing device together with modular script-based software. The prospects of such computer systems are outlined as an evolution into cyber-physical systems with on-demand plug-in of the required hardware modules.
Metric for Evaluating Availability of an Information System: A Quantitative ... (IJNSA Journal)
The purpose of this paper is to present a metric for availability based on the design of an information system. The proposed availability metric is twofold, based on the operating program and on the network delay of the information system: for a local-bound component composition, the metric is based purely on the software/operating program; for a remote-bound component composition, it also incorporates the network delay metric. The aim is a quantitative availability metric derived from the component composition of an information system, based on the dependencies among the system's individually measurable components. The metric is used for measuring and evaluating the availability of an information system from a security perspective; the measurements may be taken during the design phase or after the system is fully functional. The work provides a platform for further research on a quantitative security metric, based on the components of an information system (user, hardware, operating program, and network), that addresses all the attributes of information and network security.
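Under the common assumption of series dependency among components, the twofold composition described above can be sketched as a product of component availabilities, with the network factor added only for the remote-bound case:

```python
def system_availability(component_avail, network_avail=None):
    # Local-bound composition: with components in a series dependency,
    # overall availability is the product of component availabilities.
    a = 1.0
    for x in component_avail:
        a *= x
    # Remote-bound composition: the network delay/availability factor is
    # folded in as one more series term, per the paper's twofold metric.
    if network_avail is not None:
        a *= network_avail
    return a
```

This series-product form is an assumption for illustration; the paper's dependency graph could equally contain parallel (redundant) paths, which would combine as 1 - prod(1 - a_i) instead.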
Structural Health Monitoring by Payload Compression in Wireless Sensors Netwo... (Dr. Amarjeet Singh)
Structural health monitoring is the practice of estimating the state of structural health, or detecting changes in a structure that affect its performance. The traditional approach is a centralized data acquisition hub wired to tens or even hundreds of sensors; the installation and maintenance of these cabled systems raise significant concerns, prompting the move toward wireless sensor networks. As cost effectiveness and energy efficiency are major concerns, our main interest is to reduce overhead while keeping the structural health monitoring accurate. Since most compression algorithms are too heavyweight for wireless sensor networks, we analyze, with respect to payload compression, an algorithmic comparison of the arithmetic coding and Huffman coding algorithms. The evaluation shows that arithmetic coding is more efficient than Huffman coding for payload compression.
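For reference, Huffman code lengths (the quantity that determines compressed payload size) can be computed with a standard heap-based construction; this is a generic sketch, not the paper's implementation:

```python
import heapq
from collections import Counter

def huffman_code_lengths(payload):
    # Build a Huffman tree over symbol frequencies and return the code
    # length assigned to each symbol; sum(freq[s] * length[s]) gives the
    # compressed payload size in bits.
    freq = Counter(payload)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (subtree frequency, tie-breaker, {symbol: depth}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, m1 = heapq.heappop(heap)
        f2, _, m2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**m1, **m2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]
```

Arithmetic coding, by contrast, is not restricted to whole-bit code lengths per symbol, which is why it can approach the entropy more closely on skewed sensor payloads.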
Relevance Vector Machines for Earthquake Response Spectra (drboon)
This study uses Relevance Vector Machine (RVM) regression to develop a probabilistic model for the average horizontal component of 5%-damped earthquake response spectra. Unlike conventional models, the proposed approach does not require a functional form, and constructs the model from a set of predictive variables and a set of representative ground motion records. The RVM uses Bayesian inference to determine the confidence intervals, instead of estimating them from the mean squared errors on the training set. An example application using three predictive variables (magnitude, distance and fault mechanism) is presented for sites with shear wave velocities ranging from 450 m/s to 900 m/s. The predictions from the proposed model are compared to an existing parametric model. The results demonstrate the validity of the proposed model, and suggest that it can be used as an alternative to conventional ground motion models. Future studies will investigate the effect of additional predictive variables on the predictive performance of the model.
In machine learning, support vector machines (SVMs, also support vector networks[1]) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. The basic SVM takes a set of input data and predicts, for each given input, which of two possible classes forms the output, making it a non-probabilistic binary linear classifier.
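A minimal illustration of this binary linear classification with scikit-learn, on a toy dataset assumed for demonstration:

```python
from sklearn.svm import SVC

# Toy two-class problem: points above vs. below the line y = x.
X = [[0, 1], [1, 2], [2, 3], [1, 0], [2, 1], [3, 2]]
y = [1, 1, 1, 0, 0, 0]

clf = SVC(kernel="linear")   # maximum-margin linear separator
clf.fit(X, y)
# Classify one clearly-above and one clearly-below point.
preds = clf.predict([[0, 3], [3, 0]])
```

The fitted model outputs one of the two classes for each new input, matching the non-probabilistic binary classifier described above; kernel functions extend the same machinery to non-linear boundaries.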
Study on the performance indicators for smart grids: a comprehensive review (TELKOMNIKA Journal)
This paper presents a detailed review of performance indicators for smart grids (SG), such as voltage stability enhancement, reliability evaluation, vulnerability assessment, Supervisory Control and Data Acquisition (SCADA), and communication systems. Smart grid reliability assessment can be performed analytically or by simulation. The analytical method utilizes load point assessment techniques, whereas the simulation approach uses the Monte Carlo simulation (MCS) technique. The reliability index evaluations consider the presence or absence of energy storage elements, using simulation technologies such as MCS and analytical methods such as the system average interruption frequency index (SAIFI) and other load point indices. This paper also presents the difference between SCADA and substation automation, and the fact that substation automation, though it uses the basic concepts of SCADA, is far more advanced in nature.
Risk assessment of power system transient instability incorporating renewabl... (IJECE IAES)
Transient stability is affected by the integration of renewable energy sources, owing to reduced system inertia and uncertainties in the expected generation. The ability to manage the relation between the available big data and transient stability assessment (TSA) enables fast and accurate TSA monitoring, so the actions required for secure operation can be prepared. This work builds a predictive model using Gaussian process regression for online TSA based on selected features, with the critical fault clearing time (CCT) used as the TSA index. The selected features map the system dynamics, reducing the burden of data collection and the computation time. The required data were collected offline from power flow calculations at different operating conditions; the CCT was then calculated by electromagnetic transient simulation at each operating point, applying a self-clearing three-phase short circuit at prespecified locations. Feature selection was implemented using neighborhood component analysis, the minimum redundancy maximum relevance algorithm, and the k-means clustering algorithm. The sensitivity of the selected features led to great variation among the best features from the three methods, so a hybrid collection of the best common features was used to enhance the TSA by refining the final selected features. The proposed model was investigated on a 66-bus system.
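The core regression step can be sketched with scikit-learn's Gaussian process regressor; the feature definitions and the smooth synthetic CCT surface below are assumptions for illustration, standing in for the paper's offline electromagnetic-transient results:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical operating features (e.g. loading level, voltage margin)
# and a synthetic CCT surface in seconds.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))
cct = 0.3 - 0.2 * X[:, 0] + 0.1 * X[:, 1]

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                               alpha=1e-6, normalize_y=True)
gpr.fit(X, cct)
# Point estimate of CCT at a new operating point, with its uncertainty.
pred, std = gpr.predict([[0.5, 0.5]], return_std=True)
```

The predictive standard deviation is what makes Gaussian process regression attractive for online TSA: an operator sees not just a CCT estimate but how much to trust it.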
This research presents a reliability assessment method considering the 23 MVA, 230/15 kV transformer and its two 15 kV outgoing transmission lines at the Debre Markos substation, and further includes 139 low-voltage 15/0.4 kV distribution transformers. The total load connected to the 15 kV feeders varies between 0.33255 and 6.3185 MW. A composite system adequacy and security assessment is performed using Monte Carlo simulation. The basic data and topology used in the analysis are based on the Institution of Electrical and Electronics Engineers Reliability Test System and the distribution system for bus two of the IEEE Reliability Bus Bar Test System. The reliability indices SAIDI, SAIFI, CAIDI, EENS, AENS, ASAI, ASUI, and expected interruption costs are assessed. Distribution system reliability information was obtained from actual data for systems operated by the Ethiopian Electric Utility office, from Debre Markos substation records, and from the online SCADA system.
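The load-point indices SAIFI, SAIDI, and CAIDI follow directly from customer interruption counts and durations; a generic sketch (the event data in the test are illustrative, not the Debre Markos records):

```python
def reliability_indices(interruptions, total_customers):
    # interruptions: list of (customers_affected, duration_hours) events.
    # SAIFI = total customer interruptions / customers served
    # SAIDI = total customer interruption hours / customers served
    # CAIDI = SAIDI / SAIFI (average outage duration per interruption)
    ci = sum(n for n, _ in interruptions)
    cmi = sum(n * d for n, d in interruptions)
    saifi = ci / total_customers
    saidi = cmi / total_customers
    caidi = saidi / saifi if saifi else 0.0
    return saifi, saidi, caidi
```

In a Monte Carlo adequacy study these indices are accumulated over many sampled outage histories and then averaged.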
Contingency plans based on N-1 and N-2 contingencies are already widely used by utilities. Artificial intelligence methods are the new trend for analysing contingency scenarios, alongside state-of-the-art congestion management; they give extra backup and a boost to reliable operation under contingent power system scenarios. This paper summarises those efforts and will help utilities give more thought to recent developments in fast and intelligent computing methods. It highlights classical research and modern trends in contingency analysis, such as hybrid artificial intelligence methods. Steady-state stability assessment of a power system pursues a twofold objective: first, to appraise the system's capability to withstand major contingencies; and second, to suggest remedial actions, i.e. means to enhance this capability whenever needed. The first objective is the concern of analysis; the second is a matter of control.
In this system, we implement insider-attack detection in a sub-network using a camera. Whenever an external person redirects into the server, the server detects this and notifies the admin about the insider attack. False data injection attacks are analysed from an individual adversary's point of view, showing what it takes for an adversary to launch a successful attack.
Synchrophasor Data Based Intelligent Algorithm for Real Time Event Detection ... (IAEME Publication)
Wide area measurement systems (WAMS) have been installed at several locations in the power system. Phasor measurement units (PMUs), considered the building blocks of WAMS, are being installed at various locations in the power system. A PMU sends a very large volume of data to the power system control centre at a sampling rate of 50 or 25 samples per second. Several events occur in the system every day, but the rate at which data are received and the volume of data to be analysed are a big challenge for the power system engineer, so an intelligent system is needed to handle the large volume of synchrophasor data and identify power system events. This paper presents an intelligent algorithm to automatically detect such events from wide area measurements in real time. Synchrophasor measurements received from PMUs are fed to a KNN based pattern recognition algorithm that identifies power system events. The severity and type of event can be judged from the change in voltage magnitude and phase angle at various buses. The developed algorithm is tested on the IEEE 14-bus system and the results are verified.
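The KNN classification step can be sketched with scikit-learn; the two features (change in voltage magnitude and in phase angle) follow the abstract, but the numbers and labels below are illustrative:

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical PMU features per sample: [change in voltage magnitude (pu),
# change in phase angle (degrees)], with event labels.
X = [[0.01, 0.5], [0.02, 0.3], [0.15, 5.0],
     [0.20, 6.5], [0.02, 0.4], [0.18, 5.8]]
y = ["normal", "normal", "event", "event", "normal", "event"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
# Classify a newly received synchrophasor sample by its 3 nearest patterns.
event = knn.predict([[0.17, 6.0]])[0]
```

Because KNN needs no explicit training phase beyond storing labelled patterns, new event signatures can be added to the library without refitting a model, which suits streaming PMU data.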
Cyber-Defensive Architecture for Networked Industrial Control SystemsIJEACS
This paper deals with an inevitable consequence of the convenience and efficiency we gain from open, networked control system operation of safety-critical applications: the vulnerability of such systems to cyber-attacks. Even with numerous metrics and methods for intrusion detection and mitigation, complete detection and deterrence of internal code flaws and outside cyber-attacks has not been achieved and will not be anytime soon. Considering this inherent incompleteness of detection and prevention, and the impact of malfunctions of safety-critical operations caused by cyber incidents, this paper proposes a new computer control system architecture that assures resiliency even under compromised situations. The proposed architecture is centred on diversification of hardware systems and unidirectional communication for alerting upper layers to suspicious activities. The paper details the architectural structure of the proposed cyber-defensive computer control system for power substation applications and its validation in lab experimentation and on a cybersecurity testbed.
Advanced Automated Approach for Interconnected Power System Congestion ForecastPower System Operation
System operators need a solution that will allow them to keep the electrical grid secure in spite of frequent changes in network loadings. The day-ahead congestion forecast (DACF) is a part of the congestion management process that is becoming more and more important. This paper describes an approach to automating the DACF for an interconnected power system network. Using existing industrial tools and a workflow automation system, the congestion forecast runs in fully automatic mode, significantly reducing the need for specialist resources in the operational congestion management process. The realisation of this advanced automated approach provides a quick, efficient and cost-effective method for energy management that can be easily adopted by transmission system operators.
Response time optimization for vulnerability management system by combining ...IJECEIAES
The growth of information and communication technology has given the internet network many users. On the other side, this increases cybercrime and its risks. One of the main attack targets is network weaknesses; cyber security therefore begins with a network scan in order to stop attacks. Points of vulnerability on the network can be discovered using scanning techniques, after which mitigation or recovery measures can be implemented. However, scanning needs a short response time and high accuracy to reduce the level of damage caused by cyber-attacks. In this paper, the proposed method improves the performance of a vulnerability management system based on network and port scanning by combining benchmarking and scenario planning models. On a network scan to discover open ports on a subnet, Masscan can achieve response times of less than 2 seconds, and in scenario planning for detection on a single host, Nmap can reach less than 4 seconds. Combining both models yields an adequately optimised total response time of less than 6 seconds.
Probabilistic Performance Index based Contingency Screening for Composite Pow...IJECEIAES
Composite power system reliability involves assessing the adequacy of the generation and transmission system to meet the demand at major system load points. Contingency selection is the most tedious step in the reliability evaluation of large electric systems. A contingency is a possible future event that cannot be predicted with certainty, so uncertainty is inevitable in power system operation, and deterministic indices cannot guarantee to capture this randomness in reliability assessment. In order to account for volatility in contingencies, a new performance index is proposed in the current research. The proposed method incorporates the uncertainty into the computational procedure. Reliability test systems, the Roy Billinton Test System (6 bus) and the IEEE 24 bus reliability test system, are used to test the effectiveness of the proposed method.
Similar to On-line Power System Static Security Assessment in a Distributed Computing Frame Work (20)
Nowadays, the Internet has become an important part of human life; a person can shop, invest, and perform all banking tasks online. Almost all organizations have their own website, where customers can perform tasks like shopping simply by providing their credit card details. Online banking and e-commerce organizations have been experiencing an increase in credit card transactions and other modes of online transaction. As a result, credit card fraud has become a major issue for the credit card industry, causing financial losses for customers as well as organizations. Many techniques, like Decision Trees, Neural Networks and Genetic Algorithms, based on modern approaches such as Artificial Intelligence, Machine Learning and Fuzzy Logic, have already been developed for credit card fraud detection. In this paper, an evolutionary Simulated Annealing algorithm is used to train Neural Networks for credit card fraud detection in a real-time scenario. The paper shows how this technique can be used for credit card fraud detection and presents the detailed experimental results found when using it on real-world financial data (taken from the UCI repository) to show its effectiveness. The algorithm used in this paper is likely beneficial for organizations and for individual users in terms of cost and time efficiency. Still, some cases are misclassified, i.e. a genuine customer is classified as fraudulent or vice versa.
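The core idea above, training a neural network's weights by simulated annealing rather than gradient descent, can be sketched as follows. This is a toy single-neuron classifier on made-up "transaction" features, not the paper's network or dataset; the cooling schedule and perturbation size are illustrative assumptions:

```python
import math, random

def loss(w, data):
    """Mean squared error of a single logistic neuron on labelled data."""
    total = 0.0
    for x, y in data:
        z = w[0] + w[1] * x[0] + w[2] * x[1]
        p = 1.0 / (1.0 + math.exp(-z))
        total += (p - y) ** 2
    return total / len(data)

def anneal(data, steps=2000, t0=1.0, seed=1):
    """Search for good weights by simulated annealing: perturb the current
    weights, then accept or reject via the Metropolis criterion."""
    rng = random.Random(seed)
    w = [0.0, 0.0, 0.0]
    best, best_loss = list(w), loss(w, data)
    cur_loss = best_loss
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6            # linear cooling schedule
        cand = [wi + rng.gauss(0, 0.5) for wi in w]    # random perturbation
        cand_loss = loss(cand, data)
        # accept improvements always, worsenings with probability exp(-d/t)
        if cand_loss < cur_loss or rng.random() < math.exp((cur_loss - cand_loss) / t):
            w, cur_loss = cand, cand_loss
            if cur_loss < best_loss:
                best, best_loss = list(w), cur_loss
    return best

# Toy "transactions": (amount, hour-of-day), label 1 = fraud, 0 = genuine
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.3), 0),
        ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.7), 1)]
w = anneal(data)
# An untrained neuron scores MSE 0.25 on 0/1 labels; annealing beats that.
print(loss(w, data) < loss([0.0, 0.0, 0.0], data))
```

Unlike gradient descent, this procedure needs no derivatives, which is why annealing is attractive for non-differentiable or rugged loss surfaces.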
Wireless sensor networks (WSNs) have been widely used in various applications. In these networks, nodes collect data from attached sensors and send it to a base station. However, nodes in a WSN have a limited power supply in the form of a battery, so they are expected to minimize energy consumption in order to maximize the lifetime of the WSN. A number of techniques have been proposed in the literature to reduce energy consumption significantly. In this paper, we propose a new clustering-based technique which is a modification of the popular LEACH algorithm. In this technique, cluster heads are first elected using the improved LEACH algorithm as usual, and then clusters of nodes are formed based on the distance between each node and the cluster heads. Finally, data from each node is transferred to its cluster head. After applying aggregation, cluster heads forward data either to a cluster head that is closer to them than the sink in the forward direction, or directly to the sink. This reduction in distance travelled improves performance over the LEACH algorithm significantly.
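The cluster-formation step described above (elect heads, then attach each node to its nearest head) can be sketched in a few lines. The election probability, field size and node count are illustrative assumptions, and the energy-aware refinements of the paper are omitted:

```python
import math, random

def form_clusters(nodes, p=0.2, seed=7):
    """One LEACH-style round: elect cluster heads with probability p, then
    attach every remaining node to its nearest head (Euclidean distance)."""
    rng = random.Random(seed)
    heads = [n for n in nodes if rng.random() < p]
    if not heads:                      # guarantee at least one head per round
        heads = [rng.choice(nodes)]
    clusters = {h: [] for h in heads}
    for n in nodes:
        if n in heads:
            continue
        nearest = min(heads, key=lambda h: math.dist(n, h))
        clusters[nearest].append(n)
    return clusters

# Illustrative 100 m x 100 m field of sensor nodes
rng = random.Random(42)
nodes = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(20)]
clusters = form_clusters(nodes)
# every node is either a head or a member of exactly one cluster
print(sum(len(v) for v in clusters.values()) + len(clusters) == len(nodes))  # → True
```

The paper's multi-hop forwarding (head to nearer head, then to sink) would be an additional routing step on top of this partition.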
Next generation wireless networks comprise mobile users moving between heterogeneous networks, using terminals with multiple access interfaces and services. The most important issue in such an environment is ABC (Always Best Connected), i.e. allowing the best connectivity to applications anywhere at any time. Various vertical handover decision-making strategies have been proposed to meet the always-best-connectivity requirement. This paper provides an overview of the most interesting and recent strategies.
This paper presents the design and performance comparison of a two-stage operational amplifier topology using CMOS and BiCMOS technology. The conventional op amp circuit was designed using the RF model of BSIM3V3 in 0.6 μm CMOS technology and 0.35 μm BiCMOS technology. Both op amp circuits were designed, simulated and analysed, and performance parameters such as gain, phase margin, CMRR, PSRR and power consumption were compared. Finally, we conclude on the suitability of CMOS technology over BiCMOS technology for low power RF design.
In Cognitive Radio Networks (CRNs), Cooperative Spectrum Sensing (CSS) is used to improve the performance of the spectrum sensing techniques used to detect the licensed (primary) user's signal. In CSS, the spectrum sensing information from multiple unlicensed (secondary) users is combined to take a final decision about the presence of the primary signal. The combining techniques used to generate this final decision are called fusion techniques or rules, and are further classified into data fusion and decision fusion. In data fusion, all secondary users (SUs) share their raw spectrum detection information, such as detected energy or other statistics, while in decision fusion each SU takes a local decision and shares it by sending '0' or '1' corresponding to the absence or presence of the PU's signal respectively. The rules used in decision fusion are the OR rule, the AND rule and the K-out-of-N rule. CSS is further classified into distributed and centralized CSS. In distributed CSS, all SUs share their spectrum detection information with each other, and each SU takes the final decision individually by combining the shared information. In centralized CSS, all SUs send their detected information to a secondary base station or central unit, which combines the shared information, takes the final decision, and shares it with all SUs in the CRN. This paper covers an overview of the information fusion methods used for CSS and an analysis of decision fusion rules with simulation results.
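The three decision fusion rules named above are simple to state in code, assuming hard 0/1 local decisions from the SUs; the five-SU example is illustrative:

```python
def or_rule(decisions):
    """PU declared present if any SU reports 1."""
    return int(any(decisions))

def and_rule(decisions):
    """PU declared present only if every SU reports 1."""
    return int(all(decisions))

def k_out_of_n(decisions, k):
    """PU declared present if at least k of the N SUs report 1.
    k=1 reduces to the OR rule, k=N to the AND rule."""
    return int(sum(decisions) >= k)

local = [1, 0, 1, 1, 0]           # local decisions from five SUs
print(or_rule(local), and_rule(local), k_out_of_n(local, 3))  # → 1 0 1
```

The choice of k trades off detection probability against false-alarm probability: OR is most sensitive, AND most conservative.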
ZigBee has been developed to support lower data rates and low power consuming applications. This paper aims to analyse various parameters of the ZigBee physical layer (PHY). The performance of the ZigBee PHY is evaluated on the basis of energy consumption in transmitting and receiving modes and throughput, and the effect of variation in network size on these performance attributes is studied. Several modulation schemes are also compared, and the best scheme is suggested along with the tradeoffs between the different performance metrics.
This paper gives a brief idea of moving object tracking and its applications. In sport, it is challenging to track and detect the motion of players across video frames. The task uses optical flow analysis for motion detection and a particle filter to track players, taking into consideration the regions with player movement in sports video. Optical flow vector calculation gives the motion of players in a video frame. The paper presents an improved Lucas-Kanade algorithm for optical flow computation with large displacements and more accurate motion estimation.
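The classic Lucas-Kanade least-squares step (before the paper's improvements for large displacements) can be sketched as follows. For simplicity this estimates a single global flow vector over the whole frame from two analytically shifted synthetic images, an illustrative setup rather than real video:

```python
import math

def lucas_kanade(I1, I2):
    """Estimate one global flow vector (u, v) between two frames by the
    Lucas-Kanade least-squares solution of Ix*u + Iy*v + It = 0."""
    H, W = len(I1), len(I1[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            ix = (I1[y][x + 1] - I1[y][x - 1]) / 2.0   # spatial gradients
            iy = (I1[y + 1][x] - I1[y - 1][x]) / 2.0
            it = I2[y][x] - I1[y][x]                    # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy                         # 2x2 normal equations
    u = (-sxt * syy + syt * sxy) / det
    v = (-syt * sxx + sxt * sxy) / det
    return u, v

# Synthetic frames: frame 2 is frame 1 shifted by (0.4, 0.2) pixels
f = lambda x, y: math.sin(0.3 * x) + math.cos(0.2 * y)
I1 = [[f(x, y) for x in range(30)] for y in range(30)]
I2 = [[f(x - 0.4, y - 0.2) for x in range(30)] for y in range(30)]
u, v = lucas_kanade(I1, I2)
print(abs(u - 0.4) < 0.1 and abs(v - 0.2) < 0.1)  # → True
```

Real trackers solve the same 2x2 system per local window and, for large displacements, iterate over an image pyramid, which is where the paper's improvements apply.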
Rapid progress is being seen in the field of robotics, in both the educational and industrial automation sectors. Robotics education in particular is gaining technological advances and providing more learning opportunities, and in the automation sector there is a need and demand to automate daily human activities with robots. With such advancement and demand for robotics, the realization of a popular computer game can help students learn and acquire skills in the field. A game such as Pacman offers challenges on both the software and hardware fronts. In software, it poses the challenges of developing algorithms for a robot to escape from a pool of attacking robots and for multiple ghost robots to attack the Pacman; on the hardware front, it poses the challenge of integrating various systems to realize the game. This project aims to demonstrate the Pacman game in the real world as well as in simulation. For simulation, Player/Stage is used to develop single-client and multi-client architectures. The multi-client architecture in Player/Stage uses one global simulation proxy to which all the robot models are connected, which reduces the overhead of managing multiple robot proxies; the single-client architecture enables only two robot models to connect to the simulation proxy. The multi-client approach offers the flexibility to add sensors to each port, used distinctly by the client attached to the respective robot. The robots are named Pacman and Ghosts, which try to escape and attack respectively. A network camera is used to detect the global positions of the robots, and data is shared through inter-process communication.
In Content-Based Image Retrieval (CBIR) systems, the visual contents of the images in the database are extracted and represented by multi-dimensional feature vectors. A well-known type of CBIR system retrieves images by an unsupervised method and is known as a cluster-based image retrieval system. To enhance the performance and retrieval rate of a CBIR system, we fuse the visual contents of an image. Recently, we developed two cluster-based CBIR systems by fusing the scores of two visual contents of an image. In this paper, we analyse the performance of the two proposed CBIR systems at different levels of precision using images of varying sizes and resolutions. We also compare their performance with that of two existing CBIR systems, UFM and CLUE. Experimentally, we find that the proposed systems outperform the two existing systems, and one proposed system also performs comparatively better at every image resolution.
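Score fusion of two visual contents, as mentioned above, is typically a normalise-then-combine step. The min-max normalisation, equal weighting and toy score values below are illustrative assumptions, not the paper's actual fusion scheme:

```python
def normalize(scores):
    """Min-max normalise similarity scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(scores_a, scores_b, w=0.5):
    """Late fusion: weighted sum of two normalised per-image score lists."""
    return [w * a + (1 - w) * b
            for a, b in zip(normalize(scores_a), normalize(scores_b))]

# Illustrative similarity scores of 4 database images for one query,
# from two visual contents (e.g. colour and texture)
colour  = [0.9, 0.2, 0.6, 0.1]
texture = [0.7, 0.3, 0.8, 0.2]
fused = fuse(colour, texture)
ranking = sorted(range(4), key=lambda i: fused[i], reverse=True)
print(ranking[0])  # → 0
```

Normalisation matters because the two feature channels produce scores on different scales; without it, one channel silently dominates the ranking.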
Information systems and networks are subject to electronic attacks. When network attacks hit, organizations are thrown into crisis mode: from the IT department to call centres, to the board room and beyond, all are fraught with danger until the situation is under control. Traditional methods used to counter these threats (e.g. firewalls, antivirus software, password protection) do not provide complete security, which encourages researchers to develop Intrusion Detection Systems capable of detecting and responding to such events. This review paper presents a comprehensive study of Genetic Algorithm (GA) based Intrusion Detection Systems (IDS). It provides a brief overview of rule-based IDS, elaborates on the implementation issues of the Genetic Algorithm, and presents a comparative analysis of existing studies.
Clustering is the step-by-step process of grouping objects whose attribute values are nearly similar: a cluster is a collection of objects with nearly the same attribute values, and the properties of an object in a cluster are similar to those of other objects in the same cluster but different from those of objects in other clusters. Clustering is used in a wide range of applications like pattern recognition, image processing, data analysis and machine learning. Nowadays, more attention is being paid to categorical data than to numerical data, where the range of a numerical attribute is organized into classes like small, medium and high. A wide range of algorithms exists for clustering categorical data. Our approach is to enhance the well-known k-modes clustering algorithm to improve its accuracy. We propose a new approach named "High Accuracy Clustering Algorithm for Categorical Datasets".
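The baseline k-modes algorithm the paper builds on works like k-means, but with attribute-wise modes as centroids and simple matching dissimilarity as the distance. A minimal sketch on made-up categorical data; the initial modes are fixed here for determinism, whereas the real algorithm picks them randomly:

```python
from collections import Counter

def dissim(a, b):
    """Simple matching dissimilarity: number of mismatched attributes."""
    return sum(x != y for x, y in zip(a, b))

def k_modes(data, init, iters=10):
    """Basic k-modes: assign each row to its nearest mode, then recompute
    each mode attribute-wise. `init` indexes the rows used as starting modes."""
    modes = [data[i] for i in init]
    k = len(modes)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for row in data:
            clusters[min(range(k), key=lambda i: dissim(row, modes[i]))].append(row)
        modes = [tuple(Counter(col).most_common(1)[0][0] for col in zip(*c))
                 if c else modes[i] for i, c in enumerate(clusters)]
    return modes, clusters

data = [("red", "small"), ("red", "small"), ("red", "medium"),
        ("blue", "large"), ("blue", "large"), ("green", "large")]
modes, clusters = k_modes(data, init=[0, 3])
print([len(c) for c in clusters])  # → [3, 3]
```

Because modes replace means, the algorithm never produces a centroid outside the attribute vocabulary, which is exactly what makes it suitable for categorical data.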
A brain tumor is a malformed growth of cells within the brain which may be cancerous or non-cancerous; the term 'malformed' indicates the existence of a tumor. The tumor may be benign or malignant, and medical support is needed for further classification. A brain tumor must be detected, diagnosed and evaluated at the earliest stage, as the medical problems become grave if it is detected later. Of the various technologies available for the diagnosis of brain tumors, MRI is the preferred one, enabling both diagnosis and evaluation. The current work presents various clustering techniques employed to detect brain tumors, classifying images into normal and malformed (where a tumor is detected). The algorithm involves steps such as preprocessing, segmentation, feature extraction and classification of MR brain images. Finally, the confirmatory step specifies the tumor area by a technique called region of interest.
A proxy signature scheme enables a proxy signer to sign a message on behalf of the original signer. In this paper, we propose an ECDLP-based solution for the scheme of Chen et al. [1]. We describe an efficient and secure proxy multi-signature scheme that satisfies all the proxy requirements and requires only elliptic curve multiplication and elliptic curve addition, which incur less computational overhead than modular exponentiation. Our scheme also withstands original signer forgery and public key substitution attacks.
Watermarking has been proposed as a method to enhance data security. Text watermarking requires extreme care when embedding additional data within images, because the additional information must not affect the image quality. Digital watermarking is a method through which we can authenticate images, videos and even texts: adding a text or image watermark to a photo or animated image protects copyright and avoids unauthorized use. Watermarking serves not only authentication but also protection of documents against malicious attempts to change them or claim their rights. A good watermarking scheme hides the watermark in a way that does not affect the image quality. In this paper, a method of hiding data using the LSB replacement technique is proposed.
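LSB replacement, as named above, writes message bits into the least significant bit of each pixel, changing each value by at most 1. A minimal sketch on a flat list of illustrative 8-bit grey values (a real image would be a 2D array):

```python
def embed(pixels, bits):
    """Replace the least significant bit of each pixel with a message bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n):
    """Read the message back from the LSBs of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

cover = [200, 201, 90, 91, 120, 55]       # illustrative 8-bit grey values
message = [1, 0, 1, 1]
stego = embed(cover, message)
print(stego)                               # → [201, 200, 91, 91, 120, 55]
print(extract(stego, 4) == message)        # → True
```

Since only the lowest bit changes, the perceptual impact on image quality is minimal, which is the property the abstract emphasises.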
Today, across the various media for data transmission and storage, our sensitive data is not secure with the third parties we rely on. Cryptography plays an important role in securing our data from malicious attack. This paper presents a partial image encryption based on bit-plane permutation using the Peter de Jong chaotic map for secure image transmission and storage. The proposed partial image encryption is a raw-data encryption method in which bits of some bit-planes are shuffled among other bit-planes based on the chaotic map: using the chaotic behavior of the Peter de Jong map, the positions of all the bit-planes are permuted. The results of several experiments, correlation analysis and sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time image encryption and decryption.
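The core mechanism, deriving a key-dependent permutation of bit-planes from the Peter de Jong map, can be sketched as follows. The map parameters, seed and the sort-based permutation construction are illustrative assumptions, not the paper's actual key schedule:

```python
import math

def de_jong(n, a=1.4, b=-2.3, c=2.4, d=-2.1, x=0.1, y=0.1):
    """Iterate the Peter de Jong map and return n successive x values.
    x' = sin(a*y) - cos(b*x);  y' = sin(c*x) - cos(d*y)."""
    out = []
    for _ in range(n):
        x, y = (math.sin(a * y) - math.cos(b * x),
                math.sin(c * x) - math.cos(d * y))
        out.append(x)
    return out

def keyed_permutation(n, **key):
    """Sort plane indices by the chaotic sequence: a key-dependent permutation."""
    seq = de_jong(n, **key)
    return sorted(range(n), key=lambda i: seq[i])

def apply_perm(planes, perm):
    return [planes[p] for p in perm]

def invert_perm(planes, perm):
    out = [None] * len(perm)
    for dst, src in enumerate(perm):
        out[src] = planes[dst]
    return out

planes = list("01234567")                  # stand-ins for the 8 bit-planes
perm = keyed_permutation(8)
scrambled = apply_perm(planes, perm)
print(invert_perm(scrambled, perm) == planes)  # → True
```

Because the map is extremely sensitive to its parameters and initial point, a receiver without the exact key values cannot reconstruct the permutation, which is the security argument behind chaotic-map schemes.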
This paper presents a survey of dependency analysis of Service Oriented Architecture (SOA) based systems. SOA presents newer aspects of dependency analysis due to its different architectural style and programming paradigm. The paper surveys previous work on dependency analysis of service-oriented systems and shows the strengths and weaknesses of the current approaches and tools available for this task in the context of SOA. The main motivation of this work is to summarize recent approaches in this field of research, identify the major issues and challenges in dependency analysis of SOA-based systems, and motivate further research on the topic.
This paper proposes a novel implementation of a soft-core system using the MicroBlaze processor on a Virtex-5 FPGA. Until now, hard-core processors have typically been used as FPGA processor cores; hard cores are fixed gate-level IP functions within the FPGA fabric. The proposed processor is instead a soft-core processor: a microprocessor fully described in software, usually in an HDL, which can be implemented using the EDK tool. In this paper, a system is developed in which a MicroBlaze processor combines both hardware and software. Using this system, the user can control and communicate with all the peripherals on the supported board, using the Xilinx platform to develop an embedded system. The soft-core processor system, with peripherals such as a UART interface, SPI flash interface and SRAM interface, is designed using the Xilinx Embedded Development Kit (EDK) tools.
The article presents a simple algorithm to construct a minimum spanning tree and to find the shortest path between a pair of vertices in a graph. Our illustration includes a proof of termination; complexity analysis and simulation results are also included.
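For reference, the two problems the article addresses are classically solved with Prim's algorithm (minimum spanning tree) and Dijkstra's algorithm (shortest path); the sketch below shows these textbook versions on an illustrative graph, not the article's own algorithm:

```python
import heapq

def prim_mst(adj, start=0):
    """Prim's algorithm: grow the minimum spanning tree from `start`,
    always adding the cheapest edge that reaches a new vertex."""
    visited = {start}
    edges = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(edges)
    tree, total = [], 0
    while edges and len(visited) < len(adj):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        total += w
        for nxt, nw in adj[v]:
            if nxt not in visited:
                heapq.heappush(edges, (nw, v, nxt))
    return tree, total

def dijkstra(adj, src, dst):
    """Dijkstra's algorithm: weight of the shortest path from src to dst."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

# Illustrative weighted graph as an adjacency list
adj = {0: [(1, 4), (2, 1)], 1: [(0, 4), (2, 2), (3, 5)],
       2: [(0, 1), (1, 2), (3, 8)], 3: [(1, 5), (2, 8)]}
_, mst_weight = prim_mst(adj)
print(mst_weight, dijkstra(adj, 0, 3))  # → 8 8
```

Both run in O(E log V) with a binary heap, the baseline any simpler unified algorithm would be compared against.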
WiMAX technology has reshaped the framework of broadband wireless internet service, providing internet service to unconnected or detached areas such as eastern South Africa and rural areas of America and Asia. Full-duplex helpers, employed with a relay station selection and indexing method called Randomized Distributed Space Time coding (R-DSTC), are used to expand the coverage area of the primary WiMAX station. The basic problem arises at the cell edge due to weather conditions (rain, fog), distortion caused by multiple paths in the same communication channel, and interference created by other users. It is impractical for the receiver station to decode the transmitted signal successfully at the cell edge, which increases packet loss and retransmissions. WiMAX is nevertheless an outstanding technology for improving the quality of internet service, offering services such as Voice over Internet Protocol, video conferencing and multimedia broadcast, where a little delay in packet transmission can cause a big loss in communication. Even setting up and initializing another WiMAX station nearby, so that a mobile station could easily hand over to another base station on receiving a stronger signal, is not a good alternative: in rural areas, with few customers, installing base stations close to each other is too costly. In this review article, we present a scheme using the R-DSTC technique to randomly choose helpers (relay nodes) to expand the coverage area and assist the mobile station in secure communication with the base station. In this work, we use full-duplex helpers for better utilization of bandwidth.
Radio Frequency Identification (RFID) technology has become an emerging technique for tracking and item identification. Depending on the function, various RFID technologies can be used. Drawbacks of passive RFID technology, associated with tag reading range and reliability in difficult environmental conditions, place limits on performance in real-life situations [1]. To improve tag reading range and reliability, we consider implementing active backscattering tag technology. Software Defined Radio (SDR) technology is used to build mobiles supporting multiple radio standards in 4G networks. The restrictions in existing RFID technologies can be eliminated by the development and implementation of an SDR active backscattering tag compatible with the EPCglobal UHF Class 1 Generation 2 (Gen2) RFID standard. Such technology can be used for many applications and services.
Acetabularia Information For Class 9 .docxvaibhavrinwa19
Acetabularia acetabulum is a single-celled green alga that in its vegetative state is morphologically differentiated into a basal rhizoid and an axially elongated stalk, which bears whorls of branching hairs. The single diploid nucleus resides in the rhizoid.
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Safalta Digital Marketing Institute in Noida provides complete programmes that encompass a huge range of digital advertising and marketing components, including search engine optimization, digital communication marketing, pay-per-click marketing, content marketing, web analytics, and more. These courses are designed for students seeking a comprehensive understanding of digital marketing strategies. The institute is a first choice for young individuals or students looking to start their careers in digital advertising, offering specialized courses and certification for beginners, with thorough training in areas such as SEO, digital communication marketing, and PPC training in Noida. After finishing the programme, students receive certifications recognised by top universities, setting a strong foundation for a successful career in digital marketing.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern life, impacting daily life, industry, and the environment, and offer a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide fields in Odoo, commonly by using the "invisible" attribute in the field definition. This slide shows how to make a field invisible in Odoo 17.
2024.06.01 Introducing a competency framework for language learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.