This document discusses the classification of patterns under attack and the evaluation of pattern-classifier security. It proposes a framework for assessing classifier security and for modeling adversaries in order to characterize attack scenarios. The framework aims to provide a more comprehensive understanding of how classifiers behave under adversarial conditions, which can lead to better design decisions that improve classifier security against the attacks considered. Three applications are discussed - spam filtering, intrusion detection, and biometric verification - where pattern classifiers may be vulnerable if adversarial scenarios are not accounted for during design and evaluation.
In Quest of Benchmarking Security Risks to Cyber-Physical Systems (DETER-Project)
DeterLab provides the capability to conduct risk evaluations of cyber-physical systems in which the controllable variables range from IP-level dynamics to the introduction of malicious activity such as DDoS attacks. I recently co-authored an article, published in an IEEE magazine, that discusses how the cyber and physical aspects of such systems can be integrated to provide a CPS risk assessment environment.
In our article, titled "In the Quest of Benchmarking Security Risks to Cyber-Physical Systems" we present a generic yet practical framework for assessing security risks to cyber-physical systems. Our framework can be used to benchmark security risks when information is less than perfect, and interdependencies of physical and computational components may result in correlated failures. We focus on the risks that arise from interdependent reliability failures (faults) and security failures (attacks).
We advocate that a sound assessment of these risks requires explicit modeling of the effects of both technology-based defenses and institutions necessary for supporting them. Our game-theoretic approach to estimate security risks allows designing defenses that consider fault-tolerant control along with institutional structures.
Alefiya Hussain, University of Southern California
For information on DeterLab, visit: http://www.deter-project.org/deterlab-cyber-security-science-facility
The Next Generation Cognitive Security Operations Center: Adaptive Analytic L... (Konstantinos Demertzis)
The document discusses a proposed Next Generation Cognitive Computing Security Operations Center (NGC2SOC) that uses a novel intelligence driven cognitive computing framework called the λ-Architecture Network Flow Forensics Framework (λ-NF3). The λ-NF3 implements a Lambda machine learning architecture to analyze both batch and streaming network data using two computational intelligence algorithms - an Extreme Learning Machine neural network and a Self-Adjusting Memory k-Nearest Neighbors classifier. It aims to provide fully automated network traffic analysis, malware detection, and encrypted traffic identification for efficient defense against adversarial attacks without relying on human expertise.
GENERATING REPRESENTATIVE ATTACK TEST CASES FOR EVALUATING AND TESTING WIRELE... (IJNSA Journal)
The openness of the wireless communication medium and the flexibility of wireless communication protocols, together with their vulnerabilities, create a problem of poor security. Due to deficiencies in first-line security mechanisms such as firewalls and encryption, there is growing interest in detecting wireless attacks through a second line of defense in the form of a Wireless Intrusion Detection System (WIDS). A WIDS monitors the radio spectrum and system activities and detects attacks that leak past the first line of defense. Selecting a reliable WIDS depends significantly on its functionality and performance evaluation. Comprehensive and credible evaluation of WIDSs necessitates taking into account all possible attacks. While this is operationally impossible, it is necessary to select representative attack test cases, extracted mainly from a comprehensive classification of wireless attacks. To address this challenge, this paper proposes a holistic taxonomy of wireless security attacks from the perspective of the WIDS evaluator. The proposed taxonomy includes all necessary and sufficient dimensions for wireless attack classification and helps in generating and extracting the representative attack test cases.
Malware Risk Analysis on the Campus Network with Bayesian Belief Network (IJNSA Journal)
This document discusses using a Bayesian Belief Network (BBN) to analyze malware risk on a university campus network. It begins by introducing the campus network monitoring tools and SIR epidemiological model used to model malware propagation. It then provides background on BBN principles, including defining nodes, conditional probabilities, and using the network to compute joint probabilities. The document proposes applying a BBN to assess malware prevalence risk by relating threat, vulnerability, and cost impact on network assets. It aims to provide understandable risk assessments to inform decision making.
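The core BBN machinery the document describes - nodes, conditional probabilities, and joint-probability computation - can be sketched in a few lines. The network structure (Threat and Vulnerability feeding Infection, Infection feeding Impact) and all probability values below are illustrative assumptions, not figures from the paper:

```python
# Minimal Bayesian belief network for malware risk, evaluated by brute-force
# enumeration. Structure and probabilities are illustrative assumptions.
from itertools import product

P_T = {True: 0.3, False: 0.7}            # P(threat present)
P_V = {True: 0.4, False: 0.6}            # P(asset vulnerable)
# P(infection | threat, vulnerability)
P_I = {(True, True): 0.8, (True, False): 0.1,
       (False, True): 0.05, (False, False): 0.01}
# P(high cost impact | infection)
P_C = {True: 0.9, False: 0.05}

def joint(t, v, i, c):
    """Chain-rule factorisation of the full joint distribution."""
    pi = P_I[(t, v)] if i else 1 - P_I[(t, v)]
    pc = P_C[i] if c else 1 - P_C[i]
    return P_T[t] * P_V[v] * pi * pc

# Posterior P(infection | high impact observed), by enumeration
num = sum(joint(t, v, True, True) for t, v in product([True, False], repeat=2))
den = sum(joint(t, v, i, True) for t, v, i in product([True, False], repeat=3))
print(round(num / den, 3))
```

A real deployment would estimate the conditional probability tables from the campus monitoring data and SIR propagation model the document mentions, rather than hard-coding them.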
Classification of Malware Attacks Using Machine Learning In Decision Tree (CSCJournals)
Predicting cyberattacks using machine learning has become imperative, since cyberattacks have increased exponentially due to the stealthy and sophisticated nature of adversaries. To gain situational awareness and achieve defence in depth, using machine learning for threat prediction has become a prerequisite for cyber threat intelligence gathering. Approaches to mitigating malware attacks include the use of spam filters, firewalls, and IDS/IPS configurations to detect attacks; however, threat actors are deploying adversarial machine learning techniques to exploit vulnerabilities. This paper explores the viability of using machine learning methods to predict malware attacks and builds a classifier that automatically detects and labels an event as "Has Detection" or "No Detection". The purpose is to predict the probability of malware penetration and the extent of manipulation of network nodes for cyber threat intelligence. To demonstrate the applicability of our work, we use a decision tree (DT) algorithm to learn the dataset for evaluation. The dataset came from the Microsoft malware threat prediction competition on Kaggle. We identify probable cyberattacks on the smart grid and use attack scenarios to determine penetrations and manipulations. The results show that ML methods can be applied in a smart grid cyber supply chain environment to detect cyberattacks and predict future trends.
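A hedged sketch of the approach described above - a decision tree labelling events as "Has Detection" / "No Detection" - might look as follows. The features and data here are synthetic stand-ins; the actual study used the Microsoft malware prediction dataset from Kaggle:

```python
# Decision-tree sketch with synthetic data standing in for the Kaggle set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Illustrative features, e.g. AV engine age, open ports, patch level.
X = rng.random((n, 3))
# Synthetic ground truth: first two risk factors jointly high => detection.
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(accuracy_score(y_te, pred))
# predict_proba gives a per-event probability of malware penetration.
print(clf.predict_proba(X_te[:1]))
```

On real data the feature engineering and class balance matter far more than the classifier choice; this only shows the mechanics.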
The document discusses data security and controls in database management systems. It begins by introducing basic security concepts like secrecy, integrity, availability, security policy, and prevention vs detection approaches. It then describes access controls commonly found in current database systems, including different levels of granularity (e.g. entire database, specific relations or rows) and control modes (e.g. read, write, delete permissions). It also introduces the problem of multilevel security that traditional access controls cannot fully address.
Software security risk mitigation using object oriented design patterns (eSAT Journals)
Abstract: It is now well known that the requirements and design phases of the software development lifecycle are the phases where incorporating security yields maximum benefits. In this paper, we have tried to tie security requirements, security features, and security design patterns together in a single thread. It is a complete process that helps a designer choose the most appropriate security design pattern for the given security requirements. The process includes a risk analysis methodology at the design phase of the software that is based on the Common Criteria requirements, a well-known security standard generally used in the development of security requirements. Risk mitigation mechanisms are proposed in the form of security design patterns. An exhaustive list of the most reliable and well-proven security design patterns is prepared, and they are categorized on the basis of attributes like data sensitivity, sector, and number of users. The identified patterns are divided into three levels of security. After selecting the security requirements, the software designer can calculate the percentage contribution of security features, and on the basis of this percentage the design pattern level can be selected and applied.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
MACHINE LEARNING IN NETWORK SECURITY USING KNIME ANALYTICS (IJNSA Journal)
Machine learning has a growing effect on our everyday lives, and the field keeps expanding into new areas. Machine learning is an application of artificial intelligence that gives systems the capability to automatically learn and improve from experience without being explicitly programmed. Machine learning algorithms apply mathematical models to analyze datasets and predict values based on the data. In the field of cybersecurity, machine learning algorithms can be utilized to train and analyze Intrusion Detection Systems (IDSs) on security-related datasets. In this paper, we test different machine learning algorithms on the NSL-KDD dataset using KNIME Analytics.
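The paper runs several classifiers over NSL-KDD inside KNIME workflows. As a rough programmatic analogue, the sketch below compares a few classifiers with cross-validation in scikit-learn; since NSL-KDD is not bundled with the library, the small digits dataset stands in purely for illustration:

```python
# Cross-validated comparison of classifiers; digits stands in for NSL-KDD.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
models = {
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}
# Mean 5-fold cross-validation accuracy per model.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```

This mirrors what a KNIME workflow does with learner/predictor/scorer nodes, only expressed in code.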
Adversarial Attacks and Defenses in Malware Classification: A Survey (CSCJournals)
As malware continues to grow more sophisticated and more plentiful, traditional signature- and heuristics-based defenses no longer suffice, and the industry has turned to machine learning for malicious file detection. The challenge with this approach is that machine learning itself comes with vulnerabilities, which, if left unaddressed, present a new attack surface for attackers to exploit.
In this paper we present a survey of research in the area of machine learning-based malware classifiers, the attacks they encounter, and the defensive measures available. We start by reviewing recent advances in malware classification, including the most important works using deep learning. We then discuss in detail the field of adversarial machine learning and conduct an exhaustive review of adversarial attacks and defenses in the field of malware classification.
This document discusses the potential threat of a "Superworm", a theoretical worm that could incorporate successful propagation techniques from past worms to spread rapidly and cause widespread damage. It describes the features such a worm may have, including exploiting multiple vulnerabilities across many operating systems and using various proliferation methods. The document also examines a past university network security incident and two security technologies that could help detect and limit the spread of such a worm: an early worm detection system and a modified reverse proxy server.
A Behavior Based Intrusion Detection System Using Machine Learning Algorithms (CSCJournals)
Humans are consistently referred to as the weakest link in information security. Human factors such as individual differences, cognitive abilities, and personality traits can affect behavior and play a significant role in information security. The purpose of this study is to identify, describe, and classify the human factors affecting information security, to develop a model that reduces the risk of insider misuse, and to assess the use and performance of the best-suited artificial intelligence techniques for detecting misuse. More specifically, this study provides a comprehensive view of human-related information security risks and threats, a classification study of human-related threats in information security, a methodology for reducing the risk of human-related threats by detecting insider misuse with a behavior-based intrusion detection system using machine learning algorithms, and a comparison of numerical experiments analyzing this approach. Using the machine learning algorithm with the best learning performance, the detection rates of the attack types defined in the organized five-dimensional human threats taxonomy were determined. Lastly, the possible human factors affecting information security, as linked to the detection rates, were sorted upon evaluation of the taxonomy.
Vulnerability Analysis of 802.11 Authentications and Encryption Protocols: CV... (AM Publications)
This paper analyzes the vulnerability of WLAN cipher suites, authentication mechanisms, and credentials to known attacks using the Common Vulnerability Scoring System (CVSS).
A Survey on Hidden Markov Model (HMM) Based Intention Prediction Techniques (IJERA Editor)
This document summarizes a research paper on using hidden Markov models to predict security threats and attacks in cloud computing systems. It discusses two approaches: 1) Integrating ongoing attack detection, automatic prevention actions, and risk measurement into an autonomic cloud intrusion detection framework using a hidden Markov prediction model. 2) Using hidden Markov models to detect sequences of anomalous behaviors in system logs that may indicate an attack plan over a period of time. The document provides background on hidden Markov models and how they can be applied to modeling threat sequences and states in a cloud system to provide early warnings of potential attacks.
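The second approach above - inferring a latent attack state from a sequence of observed events - is the standard HMM forward computation. The sketch below is a minimal illustration with invented states and probabilities (two hidden states, benign versus attack-in-progress, emitting normal or suspicious log events), not the models from the surveyed papers:

```python
# Forward algorithm for a tiny two-state HMM; all numbers are illustrative.
states = ["benign", "attack"]
start = {"benign": 0.9, "attack": 0.1}
trans = {"benign": {"benign": 0.95, "attack": 0.05},
         "attack": {"benign": 0.10, "attack": 0.90}}
emit = {"benign": {"normal": 0.9, "suspicious": 0.1},
        "attack": {"normal": 0.3, "suspicious": 0.7}}

def forward(observations):
    """Return P(hidden state | observations so far) after the last event."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    total = sum(alpha.values())
    return {s: a / total for s, a in alpha.items()}

# A run of suspicious events raises the belief that an attack is underway,
# which is what enables early warning in the surveyed frameworks.
belief = forward(["normal", "suspicious", "suspicious", "suspicious"])
print(belief)
```

Real systems would learn the transition and emission matrices from labelled logs (e.g. via Baum-Welch) rather than fixing them by hand.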
Novel Malware Clustering System Based on Kernel Data Structure (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
TAXONOMY BASED INTRUSION ATTACKS AND DETECTION MANAGEMENT SCHEME IN PEER-TOPE... (IJNSA Journal)
An intrusion provides unauthorized access to, damage to, or disruption of a network, and studying it can reveal the characteristics and nature of an intruder. The paper presents a taxonomy consisting of the specification of an intruder: the taxonomy classifies intruders and provides a mechanism for intruder detection. We describe an algorithm for modeling an intruder who can attack a host system or a network system, provide a mechanism for characterizing an intrusion using system attributes, and base the detection mechanism on the knowledge and behavior of the system. The intrusion-detection mechanism uses pattern-based and threshold-based techniques for detecting an intruder. The system continuously monitors network and host activities to detect attacks on the network; the task of intrusion detection is also to monitor the usage of such systems and detect the appearance of insecure states.
This document provides a summary of a dissertation submitted by Thomas Parsons to the University of Dublin in partial fulfillment of the requirements for an MSc in Management of Information Systems in 2009. The dissertation examines the problem of false positives generated by non-heuristic anti-virus signatures at Symantec and develops a framework to prevent high severity false positives. Through analysis of root cause data from Symantec and interviews with Symantec experts, the dissertation identifies the leading causes of false positives and proposes a defect prevention approach based on software process improvement techniques to address the problem.
IRJET - A Review on Security Attacks in Biometric Authentication Systems (IRJET Journal)
This document summarizes security attacks on biometric authentication systems. It discusses how biometric systems are vulnerable to different types of attacks, including attacks at the user interface, interfaces between modules, software modules, and the biometric template database. These attacks aim to compromise the biometric template and reduce system security. The document also reviews intrinsic system failures and adversary attacks as reasons for system failure. It concludes by outlining several countermeasures that can help resist different security attacks, such as liveness detection, biometric cryptosystems, steganography, watermarking, and cancellable biometrics.
A BAYESIAN CLASSIFICATION ON ASSET VULNERABILITY FOR REAL TIME REDUCTION OF F... (IJNSA Journal)
IT assets connected to the Internet encounter alien protocols, and some parameters of protocol processing are exposed as vulnerabilities. Intrusion Detection Systems (IDSs) are installed to alert on suspicious traffic or activity. An IDS issues false-positive alerts when some behavior is construed as a partial attack pattern or when the IDS lacks environment knowledge, so continuously monitoring alerts to determine whether an alert is a false positive is a major concern. In this paper we present the design of a module external to the IDS that identifies false-positive alerts based on an anomaly-based adaptive learning model. The novel feature of this design is that the system updates the behavior profiles of assets and the environment through an adaptive learning process. A mixture model is used for behavior modeling from reference data, and the design of the detection and learning processes is based on the normal behavior of the environment. The anomaly-alert identification algorithm is built on Sparse Markov Transducer (SMT) based probabilities. The whole process is presented using real-time data, and the experimental results are validated and presented with reference to a lab environment.
Classification of software security vulnerabilities no doubt facilitates the understanding of security-related information and accelerates vulnerability analysis; the lack of proper classification not only hinders understanding but also undermines strategies for developing mitigation mechanisms for clustered vulnerabilities. Software developers and researchers now agree that the requirements and design phases of software development are where incorporating security yields maximum benefits. In this paper we have attempted to design a classifier that can identify and classify design-level vulnerabilities. In this classifier, vulnerability classes are first identified on the basis of well-established security properties like authentication and authorization. Vulnerability training data is collected from authentic sources such as the Common Weakness Enumeration (CWE) and Common Vulnerabilities and Exposures (CVE); from these databases, only vulnerabilities whose mitigation is possible at the design phase were included. This vulnerability data is then pre-processed using steps such as text stemming, stop-word removal, and case transformation. After pre-processing, an SVM (Support Vector Machine) is used to classify the vulnerabilities, and bootstrap validation is used to test and validate the classification performed by the classifier. After training the classifier, a case study is conducted on design-level vulnerabilities from the NVD (National Vulnerability Database), and vulnerability analysis is done on the basis of the classification results.
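A compressed sketch of that pipeline - text preprocessing plus an SVM labelling vulnerability descriptions with a security property - is below. The six descriptions and their labels are invented for illustration; the actual study trained on CWE/CVE entries and validated with bootstrapping:

```python
# Tf-idf + linear SVM text classifier over toy vulnerability descriptions.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

descriptions = [
    "password stored in plain text allows credential theft",
    "missing login check lets anyone access the admin page",
    "user can escalate privileges and read files of other accounts",
    "role check omitted on the delete endpoint",
    "session token is not verified before granting access",
    "low privilege user may modify configuration owned by root",
]
labels = ["authentication", "authentication", "authorization",
          "authorization", "authentication", "authorization"]

# TfidfVectorizer handles tokenisation, lowercasing and stop-word removal,
# standing in for the stemming/stop-word steps described in the paper.
clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
clf.fit(descriptions, labels)
print(clf.predict(["no check is performed before reaching the login form"]))
```

With a corpus this small the prediction is anecdotal; the shape of the pipeline, not the output, is the point.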
AN IMPROVED METHOD TO DETECT INTRUSION USING MACHINE LEARNING ALGORITHMS (ieijjournal)
An intrusion detection system detects various malicious behaviors and abnormal activities that might harm the security and trust of a computer system. IDSs operate at either the host or the network level, utilizing anomaly detection or misuse detection. The main problem is to correctly detect intruder attacks against a computer network, and the key to successful detection is the choice of proper features. To resolve the problems of existing IDS schemes, this research work proposes "an improved method to detect intrusion using machine learning algorithms". We use the KDDCUP 99 dataset to analyze the efficiency of intrusion detection with different machine learning algorithms such as Bayes, NaiveBayes, J48, J48Graft, and Random Forest. For network-based IDS on the KDDCUP 99 dataset, experimental results show that the three algorithms J48, J48Graft, and Random Forest give much better results than the other machine learning algorithms. We use WEKA to check the accuracy of the classified dataset via our proposed method, and we consider all the relevant parameters for the computation of results, i.e. precision, recall, F-measure, and ROC.
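The evaluation metrics named above reduce to simple counts over a confusion matrix. The numbers below are made up purely to show the arithmetic, not results from the paper:

```python
# Precision, recall, F-measure and accuracy from confusion-matrix counts.
tp, fp, fn, tn = 90, 10, 15, 885   # attack taken as the positive class

precision = tp / (tp + fp)         # of the alarms raised, how many were real
recall = tp / (tp + fn)            # of the real attacks, how many were caught
f_measure = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"precision={precision:.3f} recall={recall:.3f} "
      f"F={f_measure:.3f} accuracy={accuracy:.3f}")
```

Note how accuracy (0.975 here) flatters a detector on imbalanced traffic, which is why IDS papers report precision, recall, and F-measure alongside it.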
How can we predict vulnerabilities to prevent them from causing data losses (Abhishek BV)
This document discusses various approaches to predicting software vulnerabilities in order to prevent data losses. It begins by providing background on software vulnerabilities and their impacts, then evaluates three main approaches: reactive, reliability-based, and proactive. The document favors the proactive approach, which predicts flaws before they occur, and summarizes three proposed solutions that take a proactive approach: 1) building a prediction model using software characteristics, 2) using static source-code analyzers during development, and 3) employing a combination of statistical methods and code metrics throughout the software lifecycle. For each solution, it discusses the methodology, findings, advantages, and disadvantages. Overall, the document analyzes different proactive approaches to vulnerability prediction in order to address how vulnerabilities can be predicted and prevented before they cause data losses.
Novel Advances in Measuring and Preventing Software Security Weakness: Contin... (theijes)
Software weaknesses in design, architecture, code, and deployment have led to software vulnerabilities exploited by perpetrators. Countermeasure tools such as patch management systems, firewalls, and antivirus have been developed, but perpetrators have advanced, sophisticated tools such as malware with crypto-lock and crypto-wall technologies. Current countermeasure technologies are based on a detect-and-respond model or a risk management framework, which are no match for attacker technologies based on speed (such as machine-generated malware) and precision or stealth (such as command-and-control node malware). Although much ink has been poured on advances in measuring and preventing software weakness under the detect-and-respond concept, this study is motivated to explore state-of-the-art advances in the novel concept of Continuous Trust Restoration (CTR). Continuous Trust Restoration is a process of breaking down the kill chain of an attacker's activities and restoring system trust. The CTR concept deploys speed, precision, and stealth technologies such as random route mutation, random host mutation, hypervisors, trusted boot, software identities, and software-defined infrastructure. Moreover, to deploy these technologies, the study further explores a common security architectural framework with software metrics such as CVE (Common Vulnerabilities and Exposures), CWE (Common Weakness Enumeration), CVSS (Common Vulnerability Scoring System), CWSS (Common Weakness Scoring System), and CAPEC (Common Attack Pattern Enumeration and Classification). Finally, the study recommends a paradigm shift in software security countermeasure research from the current detect-and-respond models to the Continuous Trust Restoration concept, and from risk management frameworks to a Common Security Architectural Framework.
Distributed Self-organized Trust Management for Mobile Ad Hoc Networks (Mehran Misaghi)
Trust is a concept from the social sciences and can be defined as how much a node is willing to take the risk of trusting another one. The correct evaluation of trust is crucial for several security mechanisms in Mobile Ad Hoc Networks (MANETs). However, implementing an effective trust evaluation scheme is very difficult in such networks due to their dynamic characteristics. This work presents a trust evaluation scheme for MANETs based on a self-organized virtual trust network. To estimate the trustworthiness of other nodes, nodes form trust chains based on behavior evidence maintained within the trust network. Nodes periodically exchange their trust networks with their neighbors, providing an efficient method to disseminate trust information across the network. The scheme is fully distributed and self-organized, requiring no trusted third party. Simulation results show that the scheme is very efficient at gathering the evidence needed to build the trust networks, and that it has very small communication and memory overhead. Moreover, it is the first trust evaluation scheme evaluated under bad-mouthing and newcomer attacks, and it maintains its effectiveness in such scenarios.
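As an illustrative sketch (not the authors' algorithm), indirect trust along a chain is commonly aggregated by multiplying per-hop trust values and keeping the most trustworthy chain; all node names and trust values below are invented:

```python
# Hypothetical example: estimate indirect trust from A to D by multiplying
# direct trust values along every chain (up to a hop limit) and taking the best.

def chain_trust(trust_net, src, dst, max_hops=3, visited=None):
    """Return the best multiplicative trust of any chain src -> dst."""
    if visited is None:
        visited = {src}
    if max_hops == 0:
        return 0.0
    best = 0.0
    for neigh, t in trust_net.get(src, {}).items():
        if neigh in visited:
            continue
        if neigh == dst:
            best = max(best, t)
        else:
            sub = chain_trust(trust_net, neigh, dst, max_hops - 1, visited | {neigh})
            best = max(best, t * sub)
    return best

# A toy self-organized trust network: node -> {neighbor: direct trust in [0, 1]}
net = {"A": {"B": 0.9, "C": 0.6}, "B": {"D": 0.8}, "C": {"D": 0.9}}
print(chain_trust(net, "A", "D"))  # best chain is A->B->D, about 0.72
```

Multiplication is only one possible aggregation rule; schemes differ in how they discount long chains and combine conflicting evidence.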
This document summarizes a survey on balancing network load using geographic hash tables. It discusses how geographic hash tables are used to store and retrieve data from nodes in a wireless network. Two approaches to balancing the network load are proposed: 1) An analytical approach that adds new nodes to servers when load exceeds thresholds. 2) A heuristic approach that moves data between nodes to prevent any single node from receiving too many requests. The approaches aim to extend network life by distributing load more evenly without changing underlying georouting protocols.
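A minimal sketch of the geographic hash table idea, assuming the common design: a data name is hashed to a point in the deployment area and the item is stored at the node closest to that point; load balancing then amounts to spreading hashed points or re-homing items across nodes. The node names and coordinates are invented:

```python
import hashlib

def hash_to_point(key, width=100.0, height=100.0):
    """Hash a data name to deterministic coordinates in the deployment area."""
    d = hashlib.sha256(key.encode()).digest()
    x = int.from_bytes(d[:4], "big") / 2**32 * width
    y = int.from_bytes(d[4:8], "big") / 2**32 * height
    return (x, y)

def home_node(key, nodes):
    """Pick the node geographically closest to the key's hashed point."""
    px, py = hash_to_point(key)
    return min(nodes, key=lambda n: (n[1][0] - px) ** 2 + (n[1][1] - py) ** 2)

nodes = [("n1", (10, 10)), ("n2", (90, 20)), ("n3", (50, 80))]
name, _ = home_node("sensor/temperature", nodes)
print("stored at", name)  # deterministic for a given key and node set
```

Because the mapping is deterministic, any node can locate data without a directory, which is why load balancing must work without changing the underlying georouting.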
1. The document examines a local Nigerian game called "tsorry checkerboard" and applies group theory concepts.
2. The game is played on a 2x2 board with each player having up to 3 pieces, and the goal is to line up all three of one's pieces horizontally, vertically, or diagonally.
3. The possible moves of each piece (vertical, horizontal, diagonal, or staying in place) form a Klein four-group, satisfying the group properties of closure, associativity, identity, and inverses. Therefore, group theory can be applied to model the game.
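The group claim above can be checked mechanically. This illustrative snippet (not from the paper) models the four move types as (dx, dy) flags mod 2, composes them with XOR, and verifies the Klein four-group axioms:

```python
from itertools import product

E, H, V, D = (0, 0), (1, 0), (0, 1), (1, 1)  # stay, horizontal, vertical, diagonal
G = [E, H, V, D]
op = lambda a, b: (a[0] ^ b[0], a[1] ^ b[1])  # compose two moves

assert all(op(a, b) in G for a, b in product(G, G))            # closure
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in product(G, G, G))                    # associativity
assert all(op(E, a) == a == op(a, E) for a in G)               # identity
assert all(op(a, a) == E for a in G)                           # every element is its own inverse
print("Klein four-group axioms hold")
```

The last check is the Klein four-group's signature property: each element is its own inverse, which distinguishes it from the cyclic group of order four.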
This document proposes laying fiber-optic cables underwater along the River Nile to connect cities in Sudan. It notes that the River Nile and its tributaries pass through many Sudanese cities, providing a natural pathway. Laying cables underwater would be more cost-effective than overland routes due to avoiding expenses like drilling and land permits. It suggests constructing monitoring centers every 100km and connecting cities within that distance to create a network with significant cost savings over traditional methods. In conclusion, an underwater fiber-optic network along the River Nile could efficiently connect inland Sudanese cities and provide benefits over satellite or overland cable routes.
This document describes the design of an electronic voting response system for use in parliamentary settings. The system was designed to address issues with traditional voice voting methods, such as time wasted, inaccuracy in discerning the majority view, and potential for bias or conflict. The system uses an ATmega328P microcontroller to control wireless keypads with yes and no buttons and a liquid crystal display output unit. Each keypad transmits votes via radio frequency to the display unit. The system is a prototype designed for two respondents but could be scaled up at low cost. It aims to provide a more efficient, accurate and unbiased method for capturing parliamentary votes.
The document describes a method for image fusion and optimization using stationary wavelet transform and particle swarm optimization. It summarizes that image fusion combines information from multiple images to extract relevant information. The proposed method uses stationary wavelet transform for image decomposition and particle swarm optimization to optimize the fused results. It applies stationary wavelet transform to source images to decompose them into wavelet coefficients. Particle swarm optimization is then used to optimize the transformed images. The inverse stationary wavelet transform is applied to the optimized coefficients to generate the fused image. The method is tested on various images and performance is evaluated using metrics like peak signal-to-noise ratio, entropy, mean square error and standard deviation.
MACHINE LEARNING IN NETWORK SECURITY USING KNIME ANALYTICS (IJNSA Journal)
Machine learning has an ever-growing effect on our everyday lives, and the field keeps expanding into new areas. Machine learning is based on the implementation of artificial intelligence that gives systems the capability to automatically learn and improve from experience without being explicitly programmed. Machine learning algorithms apply mathematical equations to analyze datasets and predict values based on the dataset. In the field of cybersecurity, machine learning algorithms can be utilized to train and analyze Intrusion Detection Systems (IDSs) on security-related datasets. In this paper, we tested different machine learning algorithms to analyze the NSL-KDD dataset using KNIME Analytics.
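KNIME expresses such experiments as visual workflows; as a rough code stand-in, this sketch trains a minimal nearest-centroid classifier on toy NSL-KDD-style records (feature vectors labelled "normal"/"attack"). The features and values are invented for illustration, not taken from the dataset:

```python
def train_centroids(rows):
    """Average the feature vectors per label to form one centroid per class."""
    sums, counts = {}, {}
    for features, label in rows:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest in squared distance."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# toy records: (duration, bytes_sent, failed_logins)
data = [([1, 200, 0], "normal"), ([2, 300, 0], "normal"),
        ([90, 9000, 5], "attack"), ([80, 8000, 4], "attack")]
model = train_centroids(data)
print(classify(model, [85, 8500, 5]))  # -> attack
```

Real NSL-KDD experiments use 41 features and stronger learners (decision trees, naive Bayes, etc.), but the train/score split is the same shape.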
Adversarial Attacks and Defenses in Malware Classification: A Survey (CSCJournals)
As malware continues to grow more sophisticated and more plentiful, traditional signature- and heuristics-based defenses no longer cut it. Instead, the industry has recently turned to machine learning for malicious file detection. The challenge with this approach is that machine learning itself comes with vulnerabilities, and if left unattended it presents a new attack surface for attackers to exploit.
In this paper we present a survey of research in the area of machine learning-based malware classifiers, the attacks they encounter, and the defensive measures available. We start by reviewing recent advances in malware classification, including the most important works using deep learning. We then discuss in detail the field of adversarial machine learning and conduct an exhaustive review of adversarial attacks and defenses in the field of malware classification.
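One attack class such surveys cover is feature-space evasion against a linear classifier: the attacker *adds* benign-indicative features (additions tend to preserve the binary's functionality, removals may break it) until the score drops below the detection threshold. The weights, features, and threshold below are entirely invented:

```python
# Hypothetical linear malware scorer: positive score => flagged malicious.
weights = {"calls_vssadmin": 2.0, "packed": 1.5, "has_gui": -1.0,
           "signed_manifest": -1.6, "imports_user32": -0.6}
bias = -0.5
score = lambda feats: sum(weights[k] for k in feats) + bias

sample = {"calls_vssadmin", "packed"}          # detected: score is 3.0
evasive = set(sample)
# Greedy evasion: add the most benign-looking features first, stop once hidden.
for feat, w in sorted(weights.items(), key=lambda kv: kv[1]):
    if w < 0 and score(evasive) > 0:
        evasive.add(feat)
print(score(sample) > 0, score(evasive) > 0)   # -> True False
```

Against deep models the same idea appears as gradient-guided perturbations; defenses include adversarial training and monotonic classifiers that ignore purely additive changes.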
This document discusses the potential threat of a "Superworm", a theoretical worm that could incorporate successful propagation techniques from past worms to spread rapidly and cause widespread damage. It describes the features such a worm may have, including exploiting multiple vulnerabilities across many operating systems and using various proliferation methods. The document also examines a past university network security incident and two security technologies that could help detect and limit the spread of such a worm: an early worm detection system and a modified reverse proxy server.
A Behavior Based Intrusion Detection System Using Machine Learning Algorithms (CSCJournals)
Humans are consistently referred to as the weakest link in information security. Human factors such as individual differences, cognitive abilities, and personality traits can impact behavior and play a significant role in information security. The purpose of this study is to identify, describe, and classify the human factors affecting information security, develop a model to reduce the risk of insider misuse, and assess the use and performance of the best-suited artificial intelligence techniques for detecting misuse. More specifically, this study provides a comprehensive view of human-related information security risks and threats; a classification study of human-related threats in information security; a methodology developed to reduce the risk of human-related threats by detecting insider misuse with a behavior-based intrusion detection system using machine learning algorithms; and a comparison of numerical experiments analyzing this approach. Specifically, using the machine learning algorithm with the best learning performance, the detection rates of the attack types defined in the organized five-dimensional human-threats taxonomy were determined. Lastly, the possible human factors affecting information security, as linked to the detection rates, were sorted upon evaluation of the taxonomy.
Vulnerability Analysis of 802.11 Authentications and Encryption Protocols: CV... (AM Publications)
This paper analyzes the vulnerability of WLAN cipher suites, authentication mechanisms, and credentials to known attacks using the Common Vulnerability Scoring System (CVSS).
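For concreteness, a CVSS v3.1 base score (the scope-unchanged case) can be computed directly from the specification's published coefficients; the sketch below scores the vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H and is a simplified illustration, not the paper's tooling:

```python
import math

def roundup(x):
    """CVSS-style round up to one decimal place."""
    return math.ceil(x * 10) / 10

av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
c = i = a = 0.56                          # High C/I/A impact

iss = 1 - (1 - c) * (1 - i) * (1 - a)     # impact sub-score
impact = 6.42 * iss                       # scope unchanged
exploitability = 8.22 * av * ac * pr * ui
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # -> 9.8, a Critical rating
```

The scope-changed case uses a different impact formula and a 1.08 multiplier; the v3.1 spec also defines a floating-point-safe Roundup that this sketch only approximates.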
A Survey on Hidden Markov Model (HMM) Based Intention Prediction Techniques (IJERA Editor)
This document summarizes a research paper on using hidden Markov models to predict security threats and attacks in cloud computing systems. It discusses two approaches: 1) Integrating ongoing attack detection, automatic prevention actions, and risk measurement into an autonomic cloud intrusion detection framework using a hidden Markov prediction model. 2) Using hidden Markov models to detect sequences of anomalous behaviors in system logs that may indicate an attack plan over a period of time. The document provides background on hidden Markov models and how they can be applied to modeling threat sequences and states in a cloud system to provide early warnings of potential attacks.
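As a toy illustration of the idea (not the surveyed framework), a two-state HMM over "benign" and "attack" can be decoded with the Viterbi algorithm to infer the most likely hidden state sequence behind a run of log events; every probability below is invented:

```python
states = ["benign", "attack"]
start = {"benign": 0.9, "attack": 0.1}
trans = {"benign": {"benign": 0.8, "attack": 0.2},
         "attack": {"benign": 0.3, "attack": 0.7}}
emit = {"benign": {"login": 0.6, "scan": 0.1, "priv_esc": 0.3},
        "attack": {"login": 0.1, "scan": 0.5, "priv_esc": 0.4}}

def viterbi(obs):
    """Return the most likely hidden state sequence for the observations."""
    v = [{s: start[s] * emit[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        v.append({})
        new_path = {}
        for s in states:
            p, prev = max((v[-2][ps] * trans[ps][s] * emit[s][o], ps)
                          for ps in states)
            v[-1][s] = p
            new_path[s] = path[prev] + [s]
        path = new_path
    return path[max(states, key=lambda s: v[-1][s])]

print(viterbi(["login", "scan", "scan", "priv_esc"]))
# -> ['benign', 'attack', 'attack', 'attack']
```

An early-warning system would raise an alert as soon as the decoded (or forward-filtered) probability of the "attack" state crosses a threshold, rather than waiting for the full sequence.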
Novel Malware Clustering System Based on Kernel Data Structure (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
TAXONOMY BASED INTRUSION ATTACKS AND DETECTION MANAGEMENT SCHEME IN PEER-TOPE... (IJNSA Journal)
An intrusion provides unauthorized access to, damage to, or disruption of a network, and studying the process helps in understanding the characteristics and nature of an intruder. The paper presents a taxonomy consisting of the specification of an intruder: the taxonomy provides a classification of intruders and a mechanism for intruder detection. We present an algorithm for modeling an intruder who may attack a host system or a network system. A mechanism for intrusion is described using system attributes, and the detection mechanism is based on the knowledge and behavior of the system. The intrusion-detection mechanism uses pattern-based and threshold-based techniques to detect an intruder. The intrusion-detection system continuously monitors network and host activities to detect attacks on the network; its task is also to monitor the usage of such systems and detect the appearance of insecure states.
This document provides a summary of a dissertation submitted by Thomas Parsons to the University of Dublin in partial fulfillment of the requirements for an MSc in Management of Information Systems in 2009. The dissertation examines the problem of false positives generated by non-heuristic anti-virus signatures at Symantec and develops a framework to prevent high severity false positives. Through analysis of root cause data from Symantec and interviews with Symantec experts, the dissertation identifies the leading causes of false positives and proposes a defect prevention approach based on software process improvement techniques to address the problem.
IRJET - A Review on Security Attacks in Biometric Authentication Systems (IRJET Journal)
This document summarizes security attacks on biometric authentication systems. It discusses how biometric systems are vulnerable to different types of attacks, including attacks at the user interface, interfaces between modules, software modules, and the biometric template database. These attacks aim to compromise the biometric template and reduce system security. The document also reviews intrinsic system failures and adversary attacks as reasons for system failure. It concludes by outlining several countermeasures that can help resist different security attacks, such as liveness detection, biometric cryptosystems, steganography, watermarking, and cancellable biometrics.
A BAYESIAN CLASSIFICATION ON ASSET VULNERABILITY FOR REAL TIME REDUCTION OF F... (IJNSA Journal)
IT assets connected to the internet encounter alien protocols, and some parameters of protocol processing are exposed as vulnerabilities. Intrusion Detection Systems (IDS) are installed to alert on suspicious traffic or activity. An IDS issues false-positive alerts if some behavior is construed as a partial attack pattern or if the IDS lacks environmental knowledge. Continuously monitoring alerts to determine whether an alert is a false positive is a major concern. In this paper we present the design of a module external to the IDS that identifies false-positive alerts based on an anomaly-based adaptive learning model. The novel feature of this design is that the system updates the behavior profiles of assets and the environment through an adaptive learning process. A mixture model is used for behavior modeling from reference data, and the design of the detection and learning processes is based on the normal behavior of the environment. The anomaly-alert identification algorithm is built on Sparse Markov Transducer (SMT) based probability. The whole process is presented using real-time data, and the experimental results are validated and presented with reference to a lab environment.
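The paper's learner uses Sparse Markov Transducers; as a far simpler stand-in for the underlying idea, this sketch fits a first-order (bigram) Markov model of normal event sequences and treats an alert's context as a likely false positive when the model assigns it high probability, i.e., when the behavior looks normal. The event names and sequences are invented:

```python
from collections import Counter

def fit_bigrams(seqs):
    """Fit add-one-smoothed bigram transition probabilities from normal traffic."""
    pair, uni = Counter(), Counter()
    for s in seqs:
        for x, y in zip(s, s[1:]):
            pair[(x, y)] += 1
            uni[x] += 1
    alphabet = {e for s in seqs for e in s}
    return lambda x, y: (pair[(x, y)] + 1) / (uni[x] + len(alphabet))

normal = [["dns", "http", "http", "tls"], ["dns", "http", "tls"]]
p = fit_bigrams(normal)

def seq_prob(seq):
    prob = 1.0
    for x, y in zip(seq, seq[1:]):
        prob *= p(x, y)
    return prob

# An alert whose context looks like normal traffic scores far higher than
# one with never-seen transitions, and can be down-ranked as a false positive.
print(seq_prob(["dns", "http", "tls"]) > seq_prob(["tls", "dns", "dns"]))  # -> True
```

SMTs generalize this by conditioning on variable-length, sparse contexts instead of a fixed one-step history.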
Classification of software security vulnerabilities no doubt facilitates the understanding of security-related information and accelerates vulnerability analysis. The lack of a proper classification not only hinders understanding but also undermines the strategy of developing mitigation mechanisms for clustered vulnerabilities. Software developers and researchers now agree that the requirements and design phases are where incorporating security yields maximum benefit. In this paper we have attempted to design a classifier that can identify and classify design-level vulnerabilities. In this classifier, vulnerability classes are first identified on the basis of well-established security properties such as authentication and authorization. Vulnerability training data is collected from authentic sources such as the Common Weakness Enumeration (CWE) and Common Vulnerabilities and Exposures (CVE) databases; only those vulnerabilities whose mitigation is possible at the design phase were included. The vulnerability data is then pre-processed using steps such as text stemming, stop-word removal, and case transformation. After pre-processing, an SVM (Support Vector Machine) is used to classify the vulnerabilities, and bootstrap validation is used to test and validate the classification process performed by the classifier. After training the classifier, a case study is conducted on design-level vulnerabilities from the NVD (National Vulnerability Database), and vulnerability analysis is done on the basis of the classification results.
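The pre-processing stage named above (case transformation, stop-word removal, stemming) can be sketched in a few lines before the vectors ever reach an SVM; the stop-word list and suffix-stripping rule here are simplified stand-ins, not the paper's actual pipeline:

```python
import re

STOP = {"the", "a", "an", "of", "to", "is", "in", "and"}

def stem(tok):
    """Crude suffix stripping for illustration (a real pipeline would use Porter)."""
    for suf in ("ation", "ing", "ed", "s"):
        if tok.endswith(suf) and len(tok) > len(suf) + 2:
            return tok[: -len(suf)]
    return tok

def preprocess(text):
    """Lowercase, tokenize, drop stop words, and stem a vulnerability description."""
    toks = re.findall(r"[a-z]+", text.lower())
    return [stem(t) for t in toks if t not in STOP]

desc = "Improper authentication of the requests leads to authorization bypassing"
print(preprocess(desc))
# -> ['improper', 'authentic', 'request', 'lead', 'authoriz', 'bypass']
```

The resulting tokens would then be turned into weighted term vectors (e.g., TF-IDF) and fed to the SVM for class assignment.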
AN IMPROVED METHOD TO DETECT INTRUSION USING MACHINE LEARNING ALGORITHMS (ieijjournal)
An intrusion detection system detects various malicious behaviors and abnormal activities that might harm the security and trust of a computer system. An IDS operates at either the host or the network level, utilizing anomaly detection or misuse detection. The main problem is to correctly detect intruder attacks against a computer network, and the key to successful detection is the choice of proper features. To resolve the problems of existing IDS schemes, this research work proposes "an improved method to detect intrusion using machine learning algorithms." We use the KDDCUP 99 dataset to analyze the efficiency of intrusion detection with different machine learning algorithms such as Bayes, NaiveBayes, J48, J48Graft, and Random Forest. For network-based IDS on the KDDCUP 99 dataset, experimental results show that the three algorithms J48, J48Graft, and Random Forest give much better results than the other machine learning algorithms. We use WEKA to check the accuracy of the classified dataset via our proposed method, considering all the parameters for computing the results, i.e., precision, recall, F-measure, and ROC.
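The evaluation metrics named above follow directly from raw confusion counts; this sketch uses invented numbers and matches the standard per-class definitions WEKA reports:

```python
def metrics(tp, fp, fn):
    """Precision, recall, and F-measure from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

p, r, f = metrics(tp=90, fp=10, fn=30)
print(round(p, 3), round(r, 3), round(f, 3))  # -> 0.9 0.75 0.818
```

ROC area, the fourth metric, additionally needs ranked classifier scores rather than a single confusion matrix.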
This document describes the design and fabrication of a prototype for testing the durability of seat belt retractors. The current testing machine has limitations like not being able to test retractors at different mounting angles and extract/retract lengths. The new prototype aims to address these using a stepper motor, spring system instead of bungee, and sensors to detect the snatch produced every 4 cycles. It involves calculations to select components like the pneumatic cylinder, FRL unit, spring, and shaft based on the required forces, strokes and flows. A 3D model was developed and simulations conducted to validate the design. The physical prototype was then fabricated to cater to increasing demand for seat belt testing.
This document proposes a bandwidth degradation technique to reduce call dropping probability in mobile networks. It aims to dynamically adjust bandwidth allocation to multiple users according to network conditions to increase utilization. The technique allows degrading the quality of existing calls to admit new calls while maintaining quality of service. Key performance metrics analyzed include degradation ratio, degraded bandwidth, throughput, and propagation delay. The approach is intended to be implemented in MATLAB, simulating various mobility patterns for verification.
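An illustrative sketch of degradation-based call admission (not the paper's exact scheme): when capacity is short, existing calls are degraded toward a minimum guaranteed bandwidth so a new call can be admitted instead of dropped. The capacities and bandwidth units are invented:

```python
def admit(calls, new_bw, capacity, min_bw):
    """Try to admit a call, degrading existing calls down to min_bw if needed."""
    used = sum(calls)
    if used + new_bw <= capacity:
        return calls + [new_bw], True
    deficit = used + new_bw - capacity
    reclaimable = sum(bw - min_bw for bw in calls)
    if deficit > reclaimable:
        return calls, False          # block: even full degradation will not fit
    degraded = []
    for bw in calls:
        give = min(bw - min_bw, deficit)
        degraded.append(bw - give)   # take bandwidth from this call
        deficit -= give
    return degraded + [new_bw], True

calls, ok = admit([4, 4, 4], new_bw=3, capacity=12, min_bw=2)
print(ok, calls)  # -> True [2, 3, 4, 3]
```

The degradation ratio metric would then be the fraction of admitted calls running below their requested bandwidth at any instant.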
This document summarizes a study on the body composition of children participating in regular football, cricket, and gymnastics training. The study aimed to compare the anthropometric and body composition status of children in these three sports. Body composition measurements including body fat percentage, fat mass, and lean mass were taken for children in each sport. Statistical analysis found that footballers had significantly lower body fat percentage and fat mass than cricketers but did not differ significantly in lean mass. Footballers also had significantly lower body fat percentage and fat mass than cricketers as well as significantly higher lean mass. Gymnasts had significantly lower body fat percentage and fat mass than cricketers but did not differ significantly in lean mass. The study concluded that footballers generally had a better body composition.
Prediction of Fault in Distribution Transformer using Adaptive Neural-Fuzzy I... (ijsrd.com)
In this paper, we present a new method for simultaneous diagnosis of faults in distribution transformers. It uses an adaptive neuro-fuzzy inference system (ANFIS) based on Dissolved Gas Analysis (DGA). The ANFIS is first "trained" in accordance with IEC 599, so that it acquires some fault-determination ability. The CO2/CO ratios are then considered as additional input data, enabling simultaneous diagnosis of the type and location of the fault. Diagnosis techniques based on DGA have been developed to detect incipient faults in distribution transformers. The quantity of dissolved gas depends fundamentally on the types of faults occurring within the transformer. By considering these characteristics, DGA methods make it possible to detect transformer abnormalities by comparing the DGA of the transformer under surveillance with a standard one. This motivates the use of an adaptive neuro-fuzzy technique to better predict the oil condition of a transformer. The proposed method can forecast possible faults that may occur in the transformer, and can be used for maintenance purposes wherever distribution transformers play a significant role, such as when energy is distributed over a large region.
This document discusses the design, analysis, and feasibility testing of a center-mounted suspension system. It begins with an introduction to conventional suspension systems and their limitations. The proposed center-mounted system aims to improve vehicle balance in all terrains by directly attaching the suspension to the vehicle's central chassis. The document then reviews different suspension system types and analyzes the proposed system's working principles and mathematical calculations. Finally, stress analysis using ANSYS software demonstrates the advantages of the center-mounted design in absorbing shocks during turns and on bumpy roads. In conclusion, the proposed system maintains vehicle balance better than conventional designs through its unique center-attached configuration.
This document provides a review of information technology implementation for the educational development of rural India. It discusses several key points:
1) It provides an overview of the Indian education system, including the roles of public and private sectors as well as various supporting institutions.
2) It identifies several problems faced by students in rural areas, such as lack of adequate teachers and infrastructure like classrooms and toilets.
3) It discusses how information and communication technologies (ICT) like computers, internet, mobile phones can help improve quality of education through distance learning programs and training teachers.
4) It outlines several approaches that have been used to promote education in rural India using ICT, including village knowledge centers, e
This document describes a sketch-based image retrieval system that uses freehand sketches as queries to retrieve similar colored images from a database. The system first extracts features like color, texture, and shape from the sketch using descriptors such as Color and Edge Directivity Descriptor (CEDD) and Edge Histogram Descriptor (EHD). It then clusters the images in the database using k-means clustering based on the similarity of their features to the sketch. Finally, the system retrieves the most similar colored image from the clustered images as the output match for the user's sketch query.
This document compares the performance of IPv4 and IPv6 over MPLS networks. It summarizes the results of simulations run using OPNET 14.5 that evaluated packet delay, packet loss, and throughput for IPv4 and IPv6 over MPLS. The simulations found that IPv6 over MPLS exhibited higher packet loss, higher throughput, and higher delay compared to IPv4 over MPLS which had lower throughput and delay with less dropped packets. Therefore, IPv6 may be suitable for applications requiring high bandwidth but not for real-time applications due to its higher delays and packet loss.
This document analyzes the capacity of MIMO wireless channels when accounting for impairments from physical transceiver hardware limitations. It is shown that when including the effects of transceiver impairments like non-linearities, phase noise, and quantization noise, the capacity of MIMO channels reaches a finite limit as SNR increases, rather than increasing without bound. This results in a zero multiplexing gain, unlike the ideal case without impairments. However, the relative capacity increase from MIMO over single-antenna channels remains at least as large when including impairments. Various figures are presented showing the capacity and multiplexing gain for different channel models and transceiver configurations. The document concludes by stating that the analysis provides insight into the fundamental limits that hardware impairments impose on MIMO systems.
The document discusses securing biometric templates when transmitted over non-secure channels by selecting partial fingerprint and iris data, encrypting it using AES with an iris hash as the key, and transmitting the encrypted data. It outlines the need to protect biometric data due to risks of identity theft if templates are compromised. Various attacks on biometric systems and methods of template protection including cryptography and cancelable biometrics are also reviewed.
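The key step described above, using an iris hash as the AES key, can be sketched for the key-derivation part only. The AES encryption itself would need a third-party crypto library and is not shown; `iris_key` and the template bytes below are illustrative names I introduce, not the paper's.

```python
import hashlib

def iris_key(iris_template: bytes, key_bits: int = 128) -> bytes:
    """Derive a fixed-length AES key by hashing the iris template."""
    digest = hashlib.sha256(iris_template).digest()
    return digest[: key_bits // 8]  # 128-bit key -> first 16 bytes

# Illustrative template bytes; a real template would be an encoded iris code.
template = b"example-iris-code"
key = iris_key(template)
```

The derived key is deterministic for the same iris data, so sender and receiver who both hold the template can derive the same AES key without transmitting it.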
This document summarizes a study that analyzed the performance of vertical skirted strip footings on slopes using the finite element software PLAXIS 2D. Various parameters were considered, including the vertical load, depth of footing embedment, distance of footing from crest, ratio of skirt depth to footing width, and configuration of the skirt (one side, both sides, unequal sides). The results showed that skirted foundations significantly improved the bearing capacity compared to unskirted foundations. Bearing capacity increased with deeper skirt depths. Footings at the crest also showed improved bearing capacity. Footing embedment depth did not affect bearing capacity. The study provides insights into using skirted foundations to improve slope stability and bearing capacity.
This document studied the nasal parameters of two ethnic groups in Nigeria - the Ibibio and Yakurr peoples. It measured the nasal length, width and indices of 400 subjects (200 from each group, split evenly between males and females). The results found significant ethnic and gender differences in all nasal parameters. Specifically, Ibibio males had platyrrhine noses while Ibibio females had mesorrhine noses. Yakurr males had mesorrhine noses while Yakurr females had platyrrhine noses. Nasal indices thus varied significantly between the groups and could be useful for ethnic and gender differentiation.
This document summarizes research into improving transient stability in power transmission systems using a Static VAR Compensator (SVC) with a hybrid PI-Fuzzy Logic controller. It begins with an introduction to Flexible AC Transmission Systems (FACTS) and the role of SVC devices in voltage control and reactive power compensation. It then describes modeling an SVC and the operating principles of conventional PI control. The limitations of PI control for nonlinear systems are discussed. The document proposes a hybrid PI-Fuzzy Logic controller to combine the advantages of both. Simulation results using MATLAB on a 2-machine 3-bus test system show the hybrid controller improves performance during disturbances over PI or Fuzzy Logic control alone.
This document provides details on calculating various losses that occur in high voltage underground power cables, including dielectric losses, conductor losses, and sheath losses. It presents formulas to calculate voltage-dependent and current-dependent dielectric losses, as well as ohmic conductor losses and sheath eddy current and circulating current losses. The document also provides methods to calculate cable parameters like inductance, impedance, and mutual impedances between conductors and screen. It describes using these calculations and ETAP modeling to analyze losses in an existing 33kV cable network and determine that installing VAR compensators could reduce total daily power losses by approximately 2471 kW.
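The voltage-dependent dielectric loss mentioned above follows the standard per-phase formula Wd = ω·C·U0²·tanδ (W/m). A minimal sketch, assuming illustrative 33 kV cable values (the capacitance and loss factor below are typical textbook magnitudes, not figures from the document):

```python
import math

def dielectric_loss_per_m(u0_volts, c_farads_per_m, tan_delta, freq_hz=50.0):
    """Per-phase dielectric loss in W/m: Wd = omega * C * U0^2 * tan(delta)."""
    omega = 2 * math.pi * freq_hz
    return omega * c_farads_per_m * u0_volts ** 2 * tan_delta

# Illustrative values: U0 = phase-to-ground voltage of a 33 kV system,
# C ~ 0.3 uF/km, tan(delta) ~ 0.004 for XLPE-class insulation.
u0 = 33e3 / math.sqrt(3)
wd = dielectric_loss_per_m(u0, 0.3e-6 / 1000, 0.004)
```

Because the loss scales with U0², dielectric losses matter mainly at transmission voltages; doubling the voltage quadruples this loss component.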
This document provides a review of optimization techniques for the wire electrical discharge machining (WEDM) process. It begins with an introduction to WEDM, describing the working principle and important process parameters like pulse width, time between pulses, servo reference voltage, and wire tension. The document then reviews literature on optimization methods that have been used to maximize material removal rate while minimizing electrode wear rate. Specifically, it discusses two studies that used Taguchi's design of experiments approach and desirability functions to optimize cutting conditions for different materials, minimizing wear rate while maximizing material removal rate in WEDM.
Malware Risk Analysis on the Campus Network with Bayesian Belief Network (IJNSA Journal)
A security network management system provides clear guidelines on risk evaluation and assessment for enterprise networks. The threat and risk assessment is conducted to safeguard enterprise network services and maintain system confidentiality, integrity, and availability through effective control strategies. In this paper, building on our previous work analyzing integrated information security management and malware propagation on the campus network through mathematical modelling, we propose a Bayesian Belief Network with an inference-level indicator to enable the decision maker to understand the risks posed and select appropriate mitigations. We experimentally placed monitoring sensors on the campus network that give threat-alert priority levels and magnitudes for the vulnerable information assets. These methods indicate the belief inferred about malware prevalence across information security assets, supporting better-informed decisions.
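The core inference step in a Bayesian Belief Network like the one described is a Bayes update. A minimal sketch with a single infected/alert node pair, using illustrative prevalence and sensor-accuracy numbers that are not taken from the paper:

```python
def posterior_infected(p_infected, p_alert_given_inf, p_alert_given_clean):
    """Bayes' rule: probability an asset is infected given a sensor alert."""
    p_clean = 1.0 - p_infected
    p_alert = p_infected * p_alert_given_inf + p_clean * p_alert_given_clean
    return p_infected * p_alert_given_inf / p_alert

# Illustrative numbers: 5% prevalence, 90% detection rate, 10% false-alarm rate.
belief = posterior_infected(0.05, 0.90, 0.10)
```

Even with a 90% detection rate, low prevalence means an alert only raises the belief to roughly one in three, which is exactly the kind of inferred belief level a decision maker would weigh before acting.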
Exploration Draft Document - CEM Machine Learning & AI Project 2018 (Leslie McFarlin)
Draft document to present findings of exploratory work on the incorporation of machine learning and AI into an existing data security product. The project was abandoned due to conflicting work done by product management.
CROs must be part of the cybersecurity solution (David X Martin)
Chief risk officers must play a more integral role in companies' cybersecurity strategies. They should adopt a defense-in-depth approach using multiple security techniques to slow attackers. They also need to take an intelligence-driven approach, continuously adapting based on intelligence and incidents. Chief risk officers should treat cybersecurity as an enterprise risk management issue with three lines of defense - prevention, oversight, and response. Innovation is also needed in access management, distributed systems, and artificial intelligence for threat identification and recovery.
A Survey of Security of Multimodal Biometric Systems (IJERA Editor)
A biometric system is essentially a pattern recognition system operating in an adversarial environment. Like any conventional security system, a biometric system is exposed to malicious adversaries who can manipulate data to render it ineffective by compromising its integrity. Current theory and design methods for biometric systems do not take into account their vulnerability to such adversarial attacks; whether classical design methods lead to secure systems therefore remains an open problem. To make biometric systems secure, it is necessary to understand and evaluate the threats, and thereby develop effective countermeasures and robust system designs, both technical and procedural, where necessary. Accordingly, extending the theory and design methods of biometric systems is mandatory to safeguard their security and reliability in adversarial environments.
Harnessing the Power of Machine Learning in Cybersecurity.pdf (CIOWomenMagazine)
Explore the applications, benefits, and challenges of machine learning in cybersecurity for improved detection, response, and resilience.
SECURING THE DIGITAL FORTRESS: ADVERSARIAL MACHINE LEARNING CHALLENGES AND COUNTERMEASURES (IRJET Journal)
This document discusses adversarial machine learning challenges and countermeasures in cybersecurity. It begins by introducing the topic of adversarial machine learning and its threats to cybersecurity systems that incorporate machine learning models. It then reviews related literature on adversarial attacks against machine learning systems. The document explores different types of adversarial attacks, such as evasion attacks and poisoning attacks, and provides real-world examples. It also discusses the motivations and goals of adversaries launching these attacks. Finally, it delves into common attack algorithms and methods used to generate adversarial examples.
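The evasion attacks mentioned above are easiest to see against a linear classifier: the adversary nudges each feature against the sign of its weight until the score crosses the decision boundary, which is the linear-model special case of gradient-sign (FGSM-style) attacks. A minimal sketch with illustrative weights and features of my own choosing, not an example from the document:

```python
def linear_score(w, x, b=0.0):
    """Score of a linear classifier: positive -> 'malicious'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evasion_perturb(w, x, eps):
    """FGSM-style evasion on a linear model: step each feature against
    the weight's sign to push the score toward the benign side."""
    return [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for wi, xi in zip(w, x)]

w = [0.8, -0.5, 0.3]          # illustrative learned weights
x = [1.0, 0.2, 0.9]          # a sample the model flags as malicious
x_adv = evasion_perturb(w, x, eps=0.7)
```

A perturbation budget `eps` bounds how far each feature may move, mirroring the constraint that an adversarial example must stay close to the original input.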
The document discusses securing and protecting information systems through proper authentication processes and policies. It describes how today's authentication methods must be more secure to protect against threats like password hacking and impersonation. Effective security policies clearly define roles and responsibilities, and use techniques like mandatory access control, role-based access control, and multifactor authentication to regulate access to systems and data. Proper user training and system monitoring are also needed to counter evolving cyber threats.
Security evaluation of pattern classifiers under attack (Papitha Velumani)
This document discusses evaluating the security of pattern classification systems that are vulnerable to attacks. It proposes a framework for empirically evaluating classifier security that formalizes approaches from literature. This framework models the adversary's goal, knowledge and capabilities. It also models how attacks may affect training and test data distributions differently. Evaluating classifier security in this way provides a more complete understanding of performance in adversarial environments and can lead to better design choices.
Artificial intelligence is rapidly transforming the technological landscape, enhancing efficiency and precision across numerous sectors. However, the rise of AI and machine learning systems has also introduced a new set of security threats, making the development of advanced security techniques for AI systems more critical than ever.
AN ISP BASED NOTIFICATION AND DETECTION SYSTEM TO MAXIMIZE EFFICIENCY OF CLIENT HONEYPOTS (IJNSA Journal)
End users are increasingly vulnerable to attacks directed at web browsers, which exploit the popularity of today's web services. While organizations deploy several layers of security to protect their systems and data against unauthorised access, surveys reveal that a large fraction of end users do not utilize, and/or are not familiar with, any security tools. End users' hesitation and unfamiliarity with security products contribute substantially to the volume of online DDoS attacks, malware, and spam distribution. This work-in-progress paper proposes a design focused on the notion of increased participation of internet service providers in protecting end users. The proposed design takes advantage of three different detection tools to identify the maliciousness of website content and alerts users through the Internet Content Adaptation Protocol (ICAP) via an in-browser cross-platform messaging system. The system also incorporates analysis of users' online behaviour to minimize the scanning intervals of the malicious-websites database by client honeypots. Findings from our proof-of-concept design and other research indicate that such a design can provide a reliable hybrid detection mechanism while introducing low delay into the user's browsing experience.
An Empirical Study on the Security Measurements of Websites of Jordanian Public Universities (CSCJournals)
Most of the Jordanian universities’ inquiries systems, i.e. educational, financial, administrative, and research systems are accessible through their campus networks. As such, they are vulnerable to security breaches that may compromise confidential information and expose the universities to losses and other risks. At Jordanian universities, security is critical to the physical network, computer operating systems, and application programs and each area has its own set of security issues and risks. This paper presents a comparative study on the security systems at the Jordanian universities from the viewpoint of prevention and intrusion detection. Robustness testing techniques are used to assess the security and robustness of the universities’ online services. In this paper, the analysis concentrates on the distribution of vulnerability categories and identifies the mistakes that lead to a severe type of vulnerability. The distribution of vulnerabilities can be used to avoid security flaws and mistakes.
Progress of Machine Learning in the Field of Intrusion Detection Systems (ijcisjournal)
This document summarizes a research paper that proposes a new support vector machine (SVM) model for intrusion detection systems. The paper begins with background on machine learning and SVMs for intrusion detection. It then discusses related work applying SVMs and feature selection to intrusion detection. The proposed solution uses the CICIDS2017 dataset to select important features and train an SVM classifier to detect attacks. Testing shows the model achieves 97-99% precision in detecting attacks from the dataset. The results demonstrate the effectiveness of the proposed SVM model for intrusion detection.
PROGRESS OF MACHINE LEARNING IN THE FIELD OF INTRUSION DETECTION SYSTEMS (ijcisjournal)
With the growth in the use of the Internet and local area networks, malicious attacks and intrusions into computer systems are increasing. Implementing intrusion detection systems has become extremely important to help maintain good network security. Support vector machines (SVMs), a classic pattern recognition tool, have been widely used in intrusion detection. They can handle very large data with high efficiency, are easy to use, and exhibit good prediction behavior. This paper presents a new SVM model enriched with a Gaussian kernel function based on the features of the training data for intrusion detection. The new model is tested with the CICIDS2017 dataset. The test shows better results in terms of detection efficiency and false alarm rate, which can give better coverage and make detection more efficient.
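The Gaussian (RBF) kernel at the heart of such an SVM can be stated in a few lines. A minimal sketch of the kernel function itself, with an illustrative `gamma`; training the actual SVM on CICIDS2017 would be done with a library such as scikit-learn and is not shown:

```python
import math

def gaussian_kernel(x, y, gamma=0.5):
    """RBF kernel k(x, y) = exp(-gamma * ||x - y||^2), as used by RBF-SVMs."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

same = gaussian_kernel([1.0, 2.0], [1.0, 2.0])   # identical points -> 1.0
far = gaussian_kernel([0.0, 0.0], [3.0, 4.0])    # squared distance = 25
```

The kernel maps similarity into (0, 1]: identical feature vectors score 1.0 and the score decays exponentially with squared distance, which is what lets the SVM draw nonlinear boundaries between normal and attack traffic.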
With the development and rapid growth of IT infrastructure, malicious code attacks are considered the main threat to cybersecurity. Malicious JavaScripts intentionally crafted by attackers inside web pages are an emerging security issue affecting millions of users. In the past few years, a number of machine-learning studies on detecting malicious JavaScript code attacks have demonstrated poor detection accuracy and increased performance overheads. In this paper, an effective interceptor approach for detecting multivariate and novel malicious JavaScripts based on deep learning is proposed and evaluated. A hybrid feature set based on static and dynamic analysis was used. The dataset used in this study consists of 32,000 benign webpages and 12,900 malicious pages. The experimental results show that this approach was able to detect 99.01% of new malicious code variants.
International Journal of Computer Science and Information Security (IJCSIS), ISSN 1947-5500, Pittsburgh, PA, USA
Application Threat Modeling In Risk Management (Mel Drews)
How to perform threat modeling of software to protect your business and critical assets, and how to communicate your message to your boss and the Board of Directors.
The Transformative Role of Artificial Intelligence in Cybersecurity (cyberprosocial)
In an era dominated by digitization, the rise of Artificial Intelligence (AI) has been a game-changer in various domains. One area where AI has particularly shone is the realm of cybersecurity. As the digital landscape expands, so do the threats, making the role of AI in cybersecurity ever more important.
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of QPSK (a 4-state scheme) and 256-QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-QAM provides better performance than QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
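The BER comparison described above can be approximated analytically with the standard textbook AWGN formulas rather than full MATLAB simulation; the expressions below are the usual Gray-coded approximations, not the document's code:

```python
import math

def ber_qpsk(ebn0_db):
    """Per-bit error rate for QPSK over AWGN: 0.5 * erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_mqam(ebn0_db, m):
    """Common approximation for square, Gray-coded M-QAM over AWGN."""
    k = math.log2(m)                      # bits per symbol
    ebn0 = 10 ** (ebn0_db / 10)
    arg = math.sqrt(3 * k * ebn0 / (m - 1))
    return (2 / k) * (1 - 1 / math.sqrt(m)) * math.erfc(arg / math.sqrt(2))
```

At a fixed Eb/N0, higher-order QAM carries more bits per symbol but suffers a higher bit error rate, which is the SNR-versus-efficiency trade-off the comparison table captures.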
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probe-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
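The three thresholding rules mentioned above can be written compactly. A minimal sketch of per-coefficient rules, with the semi-soft (firm) rule in its common two-threshold form; the exact variants and threshold choices in the paper may differ:

```python
def hard_threshold(c, t):
    """Keep coefficients above the threshold, zero the rest."""
    return c if abs(c) > t else 0.0

def soft_threshold(c, t):
    """Zero small coefficients and shrink survivors toward zero by t."""
    if abs(c) <= t:
        return 0.0
    return (c - t) if c > 0 else (c + t)

def semi_soft_threshold(c, t1, t2):
    """Firm thresholding: zero below t1, keep above t2, interpolate between."""
    if abs(c) <= t1:
        return 0.0
    if abs(c) > t2:
        return c
    sign = 1.0 if c > 0 else -1.0
    return sign * t2 * (abs(c) - t1) / (t2 - t1)
```

Applying a different rule per region matches the paper's idea: voiced frames keep large coefficients intact (hard-style), unvoiced frames get gentler shrinkage (semi-soft), and silence is zeroed outright.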
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
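The Urdhva Tiryakbhyam (vertical-crosswise) idea is that every column of cross products can be formed independently, with carries resolved afterwards. A minimal software sketch of the digit-level algorithm, in decimal for readability; the paper's design is an 8x8-bit VHDL circuit, so this only illustrates the formula, not the hardware:

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Urdhva Tiryakbhyam: form all cross-product columns 'concurrently',
    then resolve carries in a single pass. Digits are most-significant first."""
    n, m = len(a_digits), len(b_digits)
    cols = [0] * (n + m - 1)
    for i in range(n):                # all partial products, conceptually parallel
        for j in range(m):
            cols[i + j] += a_digits[i] * b_digits[j]
    result, carry = [], 0             # carry propagation, least-significant first
    for c in reversed(cols):
        total = c + carry
        result.append(total % base)
        carry = total // base
    while carry:
        result.append(carry % base)
        carry //= base
    return list(reversed(result))
```

In hardware, the independence of the column sums is what lets all partial products be generated concurrently, which is the source of the reported 44.35% path-delay reduction over sequential Booth partial-product generation.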
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
Taking AI to the Next Level in Manufacturing.pdf (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
Skybuffer SAM4U tool for SAP license adoption - Tatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe - Precisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf - Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Monitoring and Managing Anomaly Detection on OpenShift.pdf - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
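As a concrete illustration of the kind of lightweight model this pipeline targets, here is a minimal sketch (not taken from the tutorial) of a streaming anomaly detector based on a running z-score. The class name and threshold are illustrative assumptions; the point is that Welford's online algorithm needs only O(1) memory per metric, which suits a resource-constrained edge device:

```python
import math

class StreamingAnomalyDetector:
    """Flags readings more than `threshold` standard deviations away from
    the running mean (illustrative sketch, not from the tutorial)."""

    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.threshold = threshold

    def update(self, x):
        # Welford's online update of the running mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x):
        if self.n < 2:
            return False       # not enough history yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) > self.threshold * std

# Feed in a stream of "normal" sensor readings
det = StreamingAnomalyDetector(threshold=3.0)
for v in [10.0, 10.1, 9.9, 10.05, 9.95, 10.02, 9.98]:
    det.update(v)
```

In a deployment like the one outlined above, readings would arrive via Kafka and the anomaly flag would be exposed as a Prometheus metric for alerting.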
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Digital Marketing Trends in 2024 | Guide for Staying Ahead - Wask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
5th LF Energy Power Grid Model Meet-up Slides - DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Energy Efficient Video Encoding for Cloud and Edge Computing Instances
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 18, Issue 1, Ver. II (Jan – Feb. 2016), PP 71-76
www.iosrjournals.org
DOI: 10.9790/0661-18127176
Classifying Evaluation Secure Patterns under Attacks
Mitnasala Mamatha¹, Hussain Syed²
¹Student of M.Tech (CSE) and ²Asst. Prof., Department of Computer Science and Engineering, QIS Institute of Technology, Ongole.
Abstract: Pattern classification is a branch of machine learning that focuses on the recognition of patterns and regularities in data. Pattern classification systems are used in adversarial applications such as biometric authentication, spam filtering, and network intrusion detection. These systems may exhibit vulnerabilities if the adversarial scenario is not taken into account at design time. Multimodal biometric systems, for example, are more robust to spoofing attacks because they combine information coming from several biometric traits. We evaluate the security of pattern classifiers in a way that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. We propose a framework for the evaluation of pattern-classifier security, together with a model of the adversary for characterizing any attack scenario. Reported results demonstrate that security evaluation can provide a more complete understanding of the classifier's behaviour in adversarial environments, and lead to better design choices.
Index Terms: Data mining, pattern classification, adversary model.
I. Introduction:
In pattern classification systems, machine learning algorithms are used in security-related applications such as biometric authentication, network intrusion detection, and spam filtering, to discriminate between a "legitimate" and a "malicious" sample class. Unlike standard applications, these have an intrinsically adversarial nature, since the input data can be purposely manipulated by an intelligent and adaptive adversary to undermine classifier operation, for instance to make the classifier produce false negatives. This often gives rise to an arms race between the adversary and the classifier designer. Well-known examples of attacks against pattern classifiers are: presenting a fake biometric trait to a biometric verification system (spoofing attack); modifying network packets belonging to intrusive traffic; and manipulating the content of spam emails. Adversarial machine learning is a research field that lies at the intersection of machine learning and computer security. It aims to enable the safe adoption of machine learning techniques in adversarial settings such as spam filtering, malware detection, and biometric authentication. Examples include: attacks in spam filtering, where spam messages are obfuscated through misspelling of "bad" words or insertion of "good" words; attacks in computer security, e.g., obfuscating malware code within network packets or misleading signature detection; and attacks in biometric authentication, where fake biometric traits may be exploited to impersonate a legitimate user (biometric spoofing) or to compromise users' templates that are adaptively updated over time [16]. To understand the security properties of learning algorithms in adversarial settings, one should address the following main issues:
i. identifying potential vulnerabilities of machine learning algorithms during learning and classification;
ii. devising realistic attacks that correspond to the identified threats, and evaluating their impact on the targeted system;
iii. proposing countermeasures to improve the security of machine learning algorithms against the considered attacks.
Fig. 1 Email Types
II. Related Work:
Biometric systems have proved to be useful tools for personal identification and verification. A biometric trait is any physiological or behavioural characteristic of a person that can be used to distinguish that person from other individuals. Key properties of a human physiological or behavioural trait that make it a reliable biometric for recognition are universality, distinctiveness, permanence, and collectability. These properties ensure that the trait is available in all individuals, is sufficiently variable among individuals, does not change significantly over time, and can reasonably be measured. The difficulty with any human trait that meets these criteria lies in the performance, acceptability, and circumvention of the biometric feature. Performance issues arise mainly from a combination of lack of variability in the biometric trait, noise in the sensor data due to environmental factors, and the robustness of the matching algorithm. Acceptability indicates how willing the user pool will be to use the biometric identifier regularly. Circumvention is the possibility of a non-user (impostor) getting past the system using deceptive methods.
Generation of training and test data sets from collected data is an important task in developing a classifier with high generalization capability. Resampling techniques, used in statistical analysis, are applied to model selection by estimating the classification performance of classifiers. Resampling techniques estimate statistics, such as the mean and the median, by randomly selecting data from the given data set, computing the statistics on that data, and repeating this procedure many times.
Spoofing attacks consist of submitting fake biometric traits to biometric systems, and they are a major threat to security. Multimodal biometric systems are a common countermeasure against spoofing attacks, and multimodal systems for personal identity recognition have received considerable attention in recent years. It has been shown that combining information coming from several biometric traits can overcome the limits and the weaknesses inherent in every individual biometric, resulting in higher accuracy. Intrusion detection systems (IDSs) analyse network traffic to prevent and detect malicious activities such as intrusion attempts, port scans, and denial-of-service attacks. When suspected malicious traffic is detected, an alarm is raised by the IDS and subsequently handled by the system administrator. Two main kinds of IDSs exist: misuse detectors and anomaly-based ones.
The key to building a secure multimodal biometric system lies in how the information from the different modalities is fused to make the final decision. There are two distinct categories of fusion schemes for multiple classifiers: rule-based and supervised. Supervised schemes require training but can often give better results than rule-based methods; for instance, a fusion scheme using a support vector machine (SVM) can outperform a fusion algorithm using the sum rule. Bringing a quality measure into a fusion algorithm is one method that has been used to improve performance in multibiometric systems. If, for instance, a more secure biometric of high quality gives a low match score and a less secure biometric gives a high match score, then there is a high likelihood of a spoofing attack. It is generally understood that one of the strengths of a multimodal system is its ability to compensate for noisy sensor data in an individual modality. Conversely, a more secure algorithm, to address the problem of a spoofing attack on a partial subset of the biometric modalities, must require adequate performance in all modalities. Such an algorithm would inevitably negate, to some degree, the contribution of a multimodal system to performance in the presence of noisy sensor data. A multimodal system therefore improves the performance aspect but increases security only slightly, since it remains vulnerable to partial spoofing attacks, and enhanced fusion methods that improve security will in turn suffer reduced performance when given noisy data.
The support vector machine (SVM) is a training procedure for learning classification and regression rules from data; for instance, an SVM can be used to learn polynomial, radial basis function (RBF), and multi-layer perceptron (MLP) classifiers. SVMs were first proposed by Vapnik in the 1960s for classification and have since become an active research area, owing to developments in the underlying techniques and theory combined with extensions to regression and density estimation. SVMs arose from statistical learning theory, the goal being to solve only the problem of interest without solving a more difficult problem as an intermediate step. SVMs are based on the structural risk minimization principle, closely related to regularization theory. This principle incorporates capacity control to prevent over-fitting and is thus a partial solution to the bias-variance trade-off problem.
III. Spam Filtering Overview:
Over the past few years, spam filtering software has gained popularity due to its relative accuracy and ease of deployment. With its roots in text classification research, spam filtering software seeks to answer the question "Is the message x spam or not?" The means by which this question is addressed varies with the type of classification algorithm in place. While the categorization method differs between statistical filters, their basic functionality is similar. The basic model is often known as the bag-of-words (multinomial) or
multivariate model. Essentially, a document is distilled into a set of features such as words, phrases, meta-data, etc. This set of features can then be represented as a vector whose components are Boolean (multivariate) or real values (multinomial). Note that with this model the ordering of features is ignored. The classification algorithm uses the feature vector as the basis upon which the document is judged. The usage of the feature vector varies between classification methods. As the name implies, rule-based methods classify documents based on whether or not they meet a particular set of criteria. Machine learning algorithms are primarily driven by statistics (e.g., word frequencies) derived from the feature vectors. One of the most widely used methods, Bayesian classification, attempts to calculate the probability that a message is spam based upon feature frequencies observed previously in spam and legitimate e-mail.
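The Bayesian approach just described can be sketched as a Laplace-smoothed multinomial (bag-of-words) model. The toy corpus and function names below are illustrative assumptions, not taken from the paper:

```python
import math
from collections import Counter

def train_nb(docs):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    n_docs = Counter()
    for text, label in docs:
        n_docs[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, n_docs, vocab

def classify(text, counts, n_docs, vocab):
    """Return the class with the highest log posterior under a
    Laplace-smoothed multinomial bag-of-words model."""
    total = sum(n_docs.values())
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(n_docs[label] / total)           # log prior
        denom = sum(counts[label].values()) + len(vocab)  # Laplace smoothing
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Toy corpus (illustrative only)
corpus = [
    ("buy cheap pills now", "spam"),
    ("cheap viagra offer", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch meeting tomorrow", "ham"),
]
counts, n_docs, vocab = train_nb(corpus)
```

Note that the word ordering is ignored, exactly as the bag-of-words model above prescribes; only per-class word frequencies matter.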
ALGORITHM
Construction of the Training (TR) and Testing (TS) Sets
This algorithm constructs training and testing sets of any desired size from the distribution D. It is based on classical resampling techniques such as cross-validation and bootstrapping, and its task is to discriminate between legitimate (L) and malicious (M) samples. X denotes a d-dimensional feature vector. The resulting sets exhibit properties analogous to those of classical performance evaluation methods based on the same techniques. The step-by-step procedure is as follows:
i. Consider n labelled samples.
ii. The class label y belongs to legitimate (L) or malicious (M), and the attack flag a belongs to true (T) or false (F).
iii. Initially the sample set is empty.
iv. For i from 1 to n:
v. Draw a sample y from the class probability distribution over {L, M}.
vi. Draw a sample a from the distribution of a given y.
vii. Draw a sample x from D(y, a) if it is analytically defined; otherwise draw a sample with replacement from the data available for (y, a).
viii. Add (x, y, a) to the sample set S.
ix. End for.
x. Return the sample set.
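Steps i-x can be sketched as follows. Since the paper defines the distributions and data pools only abstractly, the probabilities and the pool below are illustrative assumptions:

```python
import random

def build_set(n, p_malicious, p_attack_given_y, pool, seed=0):
    """Draw n labelled samples following the resampling procedure:
    sample the class y in {L, M}, then the attack flag a in {T, F},
    then bootstrap a feature vector x with replacement from the
    pool of data available for the pair (y, a)."""
    rng = random.Random(seed)
    sample_set = []
    for _ in range(n):
        y = "M" if rng.random() < p_malicious else "L"          # step v
        a = "T" if rng.random() < p_attack_given_y[y] else "F"  # step vi
        x = rng.choice(pool[(y, a)])   # step vii: draw with replacement
        sample_set.append((x, y, a))   # step viii
    return sample_set                  # step x

# Illustrative pool of feature vectors for each (class, attack) pair
pool = {
    ("L", "F"): [[0.1, 0.2], [0.2, 0.1]],
    ("L", "T"): [[0.3, 0.3]],
    ("M", "F"): [[0.9, 0.8]],
    ("M", "T"): [[0.7, 0.9], [0.8, 0.7]],
}
tr = build_set(100, p_malicious=0.5,
               p_attack_given_y={"L": 0.1, "M": 0.5}, pool=pool)
```

Drawing with replacement is what makes this a bootstrap: the same pooled sample may appear several times in the generated set.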
IV. Spam and Online SVMs:
The support vector machine (SVM) is a training procedure for learning classification and regression rules from data; for instance, an SVM can be used to learn polynomial, radial basis function (RBF), and multi-layer perceptron (MLP) classifiers. SVMs were first proposed by Vapnik in the 1960s for classification and have lately become an active research area, owing to developments in the underlying techniques and theory combined with extensions to regression and density estimation. SVMs arose from statistical learning theory, the goal being to solve only the problem of interest without solving a more difficult problem as an intermediate step. SVMs are based on the structural risk minimization principle, closely related to regularization theory. This principle incorporates capacity control to prevent over-fitting and is thus a partial solution to the bias-variance trade-off problem. Two key elements in the implementation of SVMs are the techniques of mathematical programming and kernel functions. The parameters are found by solving a quadratic programming problem with linear equality and inequality constraints, rather than by solving a non-convex, unconstrained optimization problem. The flexibility of kernel functions allows the SVM to search a wide variety of hypothesis spaces. The geometric interpretation of support vector classification (SVC) is that the algorithm searches for the optimal separating surface, i.e., the hyperplane that is, in a sense, equidistant from the two classes. This optimal separating hyperplane has many appealing statistical properties. SVC is presented first for the linearly separable case. Kernel functions are then introduced in order to construct non-linear decision surfaces. Finally, for noisy data, when complete separation of the two classes may not be desirable, slack variables are introduced to allow for training errors.
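As a concrete sketch of the soft-margin objective described above, the following trains a linear SVM with Pegasos-style stochastic sub-gradient descent on the hinge loss. This is a simplification chosen for brevity: the quadratic-programming formulation, kernels, and the bias term are omitted (so the toy data is centred around the origin):

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Pegasos-style stochastic sub-gradient descent on the soft-margin
    (hinge-loss) SVM objective.  Labels must be +1/-1.  The bias term is
    omitted for simplicity, so the data should be roughly centred."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    t = 0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)       # decreasing step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]   # regulariser shrink
            if margin < 1.0:            # hinge-loss sub-gradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy, linearly separable data (illustrative)
X = [[2.0, 2.0], [1.0, 2.0], [-2.0, -2.0], [-1.0, -2.0]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
```

The shrink step implements the capacity control mentioned above (it penalises large weight norms), while the conditional update only fires for points that violate the margin, mirroring the role of slack variables.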
V. Problem Statement
A systematic and unified treatment of this issue is needed to allow the trusted adoption of pattern classifiers in adversarial environments, starting from the theoretical foundations up to novel design methods, extending the classical design cycle. Because pattern classification systems based on classical theory and design methods do not take adversarial settings into account, they exhibit vulnerabilities to several potential attacks, allowing adversaries to undermine their usefulness.
Three main open issues can be identified:
i. analyzing the vulnerabilities of classification algorithms and the corresponding attacks;
ii. developing novel methods to assess classifier security against these attacks, which is not possible using classical performance evaluation methods;
iii. developing novel design methods to guarantee classifier security in adversarial environments.
VI. Pattern Recognition:
Pattern recognition is a branch of machine learning that focuses on the recognition of patterns and
regularities in data, although it is in some cases considered to be nearly synonymous with machine learning.
Pattern recognition systems are in many cases trained from labelled "training" data (supervised learning), but
when no labelled data are available other algorithms can be used to discover previously unknown patterns
(unsupervised learning). The terms pattern recognition, machine learning, data mining and knowledge discovery
in databases (KDD) are hard to separate, as they largely overlap in their scope. Machine learning is the common
term for supervised learning methods and originates from artificial intelligence, whereas KDD and data mining
have a larger focus on unsupervised methods and stronger connection to business use. Pattern recognition has its
origins in engineering, and the term is popular in the context of computer vision: a leading computer vision
conference is named the Conference on Computer Vision and Pattern Recognition. In pattern recognition, there may be a greater interest in formalizing, explaining, and visualizing the pattern, whereas machine learning traditionally focuses on maximizing recognition rates. Yet all of these domains have evolved substantially from their roots in artificial intelligence, engineering, and statistics, and have become increasingly similar by integrating developments and ideas from each other. In machine learning, pattern recognition is the assignment of a label to a given input value. In statistics, discriminant analysis was introduced for this same purpose in 1936. An
example of pattern recognition is classification, which attempts to assign each input value to one of a given set
of classes (for example, determine whether a given email is "spam" or "non-spam"). However, pattern
recognition is a more general problem that encompasses other types of output as well. Other examples are
regression, which assigns a real-valued output to each input; sequence labelling, which assigns a class to each
member of a sequence of values (for example, part of speech tagging, which assigns a part of speech to each
word in an input sentence); and parsing, which assigns a parse tree to an input sentence, describing the syntactic
structure of the sentence.
VII. Contributions, Limitations And Open Issues
In this paper we focused on empirical security evaluation of pattern classifiers that have to be deployed
in adversarial environments, and proposed how to revise the classical performance evaluation design step, which
is not suitable for this purpose. Our main contribution is a framework for empirical security evaluation that
formalizes and generalizes ideas from previous work, and can be applied to different classifiers, learning
algorithms, and classification tasks. It is grounded on a formal model of the adversary that enables security
evaluation; and can accommodate application-specific techniques for attack simulation. This is a clear
advancement with respect to previous work, since without a general framework most of the proposed techniques
(often tailored to a given classifier model, attack, and application) could not be directly applied to other
problems. An intrinsic limitation of our work is that security evaluation is carried out empirically, and it is thus data dependent; on the other hand, model-driven analyses require a full analytical model of the problem and of the adversary's behaviour that may be very difficult to develop for real-world applications. Another intrinsic limitation is due to the fact that our method is not application-specific, and, therefore, provides only high-level guidelines for simulating attacks. Indeed, detailed guidelines require one to take into account application-specific constraints and adversary models. Our future work will be devoted to developing techniques for simulating attacks for different applications. Although the design of secure classifiers is a distinct problem from security evaluation, our framework could also be exploited to this end.
VIII. Experimental Results
Table 1.0: Classification of pattern classifier potential

Attacks    Pattern    Classifier    Potential
0.0992     2          6             10
0.0995     5          5             20
0.0996     5          5             30
0.0997     7          8             50
1          5          10            60
Fig. 2 Function of classifier values
Each model's value decreases, dropping to zero for values between 3 and 5 (depending on the classifier). This means that all testing spam emails were misclassified as legitimate after adding or obfuscating from 3 to 5 words. The pattern and attack classifiers perform very similarly when they are not under attack, regardless of the feature set size; therefore, from the viewpoint of classical performance evaluation, the designer could choose any of the eight models. However, security evaluation under the simulated attacks reveals clear differences among them.
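The behaviour shown in Fig. 2 can be reproduced in miniature with a good-word attack [5] against a linear text classifier: the attacker greedily appends the most legitimate-looking words until the message's score drops below the decision threshold. The word weights below are invented purely for illustration:

```python
def spam_score(words, weights):
    """Linear text classifier: a positive total score means 'spam'."""
    return sum(weights.get(w, 0.0) for w in words)

def words_to_evade(spam_words, weights, good_words, budget=10):
    """Greedily append the most legitimate-looking words and return how
    many are needed before the message is classified as legitimate,
    or None if the budget is exhausted (attack failed)."""
    words = list(spam_words)
    ranked = sorted(good_words, key=lambda w: weights.get(w, 0.0))
    for k, w in enumerate(ranked[:budget], start=1):
        words.append(w)
        if spam_score(words, weights) <= 0:
            return k
    return None

# Invented weights for illustration (a real filter would learn them)
weights = {"cheap": 2.0, "pills": 2.5,
           "meeting": -2.0, "tomorrow": -1.8, "notes": -1.5}
needed = words_to_evade(["cheap", "pills"], weights,
                        ["notes", "meeting", "tomorrow"])
```

With these toy weights the message flips to "legitimate" after three added words, which is the same order of magnitude (3 to 5 words) reported for the evaluated classifiers.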
IX. Conclusion
In this paper we focused on empirical security evaluation of pattern classifiers that have to be deployed
in adversarial environments, and proposed how to revise the classical performance evaluation design step, which
is not suitable for this purpose. Our main contribution is a framework for empirical security evaluation that
formalizes and generalizes ideas from previous work, and can be applied to different classifiers, learning
algorithms, and classification tasks. It is grounded on a formal model of the adversary and on a model of data
distribution that can represent all the attacks considered in previous work; it provides a systematic method for the
generation of training and testing sets that enables security evaluation; and it can accommodate application-
specific techniques for attack simulation. An intrinsic limitation of our work is that security evaluation is carried
out empirically, and it is thus data dependent; on the other hand, model-driven analyses require a full analytical
model of the problem and of the adversary's behaviour, which may be very difficult to develop for real-world
applications. Another intrinsic limitation is due to the fact that our method is not application-specific and therefore
provides only high-level guidelines for simulating attacks. Indeed, detailed guidelines require one to take into
account application-specific constraints and adversary models.
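The systematic generation of attacked testing sets can be sketched as follows. This is our reading of the idea, not the framework's exact algorithm, and the attack function and token names below are hypothetical:

```python
import random

random.seed(0)

def make_attacked_test_set(samples, attack_fn, attack_fraction=1.0):
    """Build a testing set for security evaluation.

    A fraction of the malicious samples is replaced with attacked
    versions, so performance can be measured both with and without
    the simulated attack.

    samples: list of (features, label) pairs, label 1 = malicious.
    attack_fn: simulates the adversary's modification of one sample.
    """
    out = []
    for x, y in samples:
        if y == 1 and random.random() < attack_fraction:
            out.append((attack_fn(x), y))
        else:
            out.append((x, y))
    return out

# Hypothetical attack: the adversary deletes one indicative token.
attack = lambda tokens: [t for t in tokens if t != "free"]

test = [(["free", "winner"], 1), (["meeting", "notes"], 0)]
print(make_attacked_test_set(test, attack))
```

The `attack_fraction` parameter (an assumption of this sketch) lets the designer model adversaries who only modify part of the malicious traffic, while legitimate samples are always left untouched.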
References:
[1]. R.N. Rodrigues, L.L. Ling, and V. Govindaraju, "Robustness of Multimodal Biometric Fusion Methods against Spoof Attacks," J. Visual Languages and Computing, vol. 20, no. 3, pp. 169-179, 2009.
[2]. P. Johnson, B. Tan, and S. Schuckers, "Multimodal Fusion Vulnerability to Non-Zero Effort (Spoof) Imposters," Proc. IEEE Int'l Workshop Information Forensics and Security, pp. 1-5, 2010.
[3]. P. Fogla, M. Sharif, R. Perdisci, O. Kolesnikov, and W. Lee, "Polymorphic Blending Attacks," Proc. 15th Conf. USENIX Security Symp., 2006.
[4]. G.L. Wittel and S.F. Wu, "On Attacking Statistical Spam Filters," Proc. First Conf. Email and Anti-Spam, 2004.
[5]. D. Lowd and C. Meek, "Good Word Attacks on Statistical Spam Filters," Proc. Second Conf. Email and Anti-Spam, 2005.
[6]. A. Kolcz and C.H. Teo, "Feature Weighting for Improved Classifier Robustness," Proc. Sixth Conf. Email and Anti-Spam, 2009.
[7]. D.B. Skillicorn, "Adversarial Knowledge Discovery," IEEE Intelligent Systems, vol. 24, no. 6, Nov./Dec. 2009.
[8]. D. Fetterly, "Adversarial Information Retrieval: The Manipulation of Web Content," ACM Computing Rev., 2007.
[9]. R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification. Wiley-Interscience, 2000.
[10]. N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, "Adversarial Classification," Proc. 10th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 99-108, 2004.
[11]. M. Barreno, B. Nelson, R. Sears, A.D. Joseph, and J.D. Tygar, "Can Machine Learning Be Secure?" Proc. ACM Symp. Information, Computer and Comm. Security (ASIACCS), pp. 16-25, 2006.
[12]. A.A. Cárdenas and J.S. Baras, "Evaluation of Classifiers: Practical Considerations for Security Applications," Proc. AAAI Workshop Evaluation Methods for Machine Learning, 2006.
[13]. P. Laskov and R. Lippmann, "Machine Learning in Adversarial Environments," Machine Learning, vol. 81, pp. 115-119, 2010.
[14]. L. Huang, A.D. Joseph, B. Nelson, B. Rubinstein, and J.D. Tygar, "Adversarial Machine Learning," Proc. Fourth ACM Workshop Artificial Intelligence and Security, pp. 43-57, 2011.
[15]. M. Barreno, B. Nelson, A. Joseph, and J. Tygar, "The Security of Machine Learning," Machine Learning, vol. 81, pp. 121-148, 2010.
[16]. D. Lowd and C. Meek, "Adversarial Learning," Proc. 11th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 641-647, 2005.
[Bar chart of Table 1.0: "Attacks", "pattern classifier", and "Potential" values for settings 1-5; y axis 0 to 70.]
AUTHORS:
MITNASALA MAMATHA is pursuing M.Tech (Computer Science and Engineering) at
QIS Institute of Technology, Prakasam Dist., Andhra Pradesh, India.
HUSSAIN SYED is currently working as Asst. Professor in the Department of Computer
Science and Engineering, QIS Institute of Technology, Ongole, Prakasam Dist., Andhra
Pradesh, India.