Fuzzing is one of the most commonly used methods for detecting software vulnerabilities, a major cause of information security incidents. Although it has the advantages of simple design and a low false-report rate, its efficiency is usually poor. In this paper we present a smart fuzzing approach for integer overflow detection and a tool, SwordFuzzer, which implements this approach. Unlike standard fuzzing techniques, which randomly change parts of the input file with no information about its underlying syntactic structure, SwordFuzzer uses online dynamic taint analysis to identify which bytes of the input file are used in security-sensitive operations and then focuses mutation on those bytes. The generated inputs are therefore more likely to trigger potential vulnerabilities. We evaluated SwordFuzzer on an example program and a number of real-world applications. The experimental results show that SwordFuzzer can accurately locate the key bytes of the input file and dramatically improve the effectiveness of fuzzing in detecting real-world vulnerabilities.
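The taint-guided mutation strategy described above can be sketched in a few lines. This is a hypothetical illustration, not SwordFuzzer's actual code; the file format, offsets and boundary values are invented for the example:

```python
import random

def mutate(data: bytes, hot_offsets, rng=random.Random(0)) -> bytes:
    """Mutate only the bytes that taint analysis flagged as security-sensitive."""
    buf = bytearray(data)
    # Boundary values that commonly trigger integer overflows.
    interesting = [0x00, 0x7F, 0x80, 0xFF]
    for off in hot_offsets:
        buf[off] = rng.choice(interesting)
    return bytes(buf)

seed = b"RIFF\x24\x00\x00\x00WAVE"
# Suppose taint analysis reported that offsets 4-7 (a length field) flow
# into an allocation size; mutation is focused there, not on the whole file.
fuzzed = mutate(seed, hot_offsets=[4, 5, 6, 7])
```

Restricting mutation to the tainted length field keeps the rest of the file syntactically valid, which is why such inputs reach the vulnerable computation more often than blind random mutation.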
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance and readability. The articles published in our journal can be accessed online.
Taint analysis is a trending approach to analysing software for security purposes. With this technique, taint tags are attached to data entering an application from sensitive sources, and the propagation of the tainted data is then monitored carefully. Taint analysis can be performed in two ways: static taint analysis, where the analysis is conducted without executing the program, and dynamic taint analysis, where the tainted data is monitored during program execution. This paper reviews the taint analysis technique, with a focus on dynamic taint analysis. In addition, some of the existing taint analysis tools and their application areas are reviewed. Finally, the paper summarises the defects associated with each of the tools.
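A minimal sketch of the dynamic variant: a value wrapper carries a taint tag that propagates through arithmetic. The class and variable names are illustrative and not tied to any particular tool:

```python
class Tainted:
    """Value wrapper that carries a taint tag through operations on it."""
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        other_value = other.value if isinstance(other, Tainted) else other
        other_taint = other.tainted if isinstance(other, Tainted) else False
        # The result is tainted if either operand was tainted.
        return Tainted(self.value + other_value, self.tainted or other_taint)

user_len = Tainted(1024, tainted=True)  # entered from a sensitive source
header = Tainted(16)                    # trusted program constant
total = user_len + header
# total.tainted is True: anything derived from the input stays monitored
```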
Application of Attack Graphs in Intrusion Detection Systems: An Implementation (CSCJournals)
Internet attacks have been increasing continuously in recent years, in both scale and complexity, challenging existing defense solutions with new complications and rendering them almost ineffective against multi-stage attacks; intrusion detection systems in particular fail to identify such complex attacks. An attack graph is a modeling technique used to visualize the different steps an attacker might take to achieve his end goal, based on existing vulnerabilities and weaknesses in the system. This paper studies the application of attack graphs in intrusion detection and prevention systems (IDS/IPS) in order to better identify complex attacks based on predefined models, configurations, and alerts. As a proof of concept, a tool is developed that interfaces with the well-known SNORT [1] intrusion detection system and matches its alerts against an attack graph generated using the NESSUS [2] vulnerability scanner (kept up to date using the National Vulnerability Database (NVD) [3]) and the MULVAL [4] attack-graph generation library. The tool makes it possible to keep track of the attacker's activities along the different stages of the attack graph.
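The core correlation idea, matching a stream of IDS alerts against a precomputed attack graph, can be sketched as follows. The graph, signature names and alert-to-node mapping are invented for illustration and are not MULVAL or SNORT output formats:

```python
# Attack graph as an adjacency list: each edge is a step the attacker may take.
attack_graph = {
    "recon":       ["exploit_web"],
    "exploit_web": ["priv_esc"],
    "priv_esc":    ["exfiltrate"],
    "exfiltrate":  [],
}
# Hypothetical mapping from IDS alert signatures to graph nodes.
alert_to_node = {"PORTSCAN": "recon", "SQLI": "exploit_web", "SUDO_ABUSE": "priv_esc"}

def track(alerts, graph, mapping, start="recon"):
    """Return the deepest graph node reached through the observed alert sequence."""
    position = None
    for alert in alerts:
        node = mapping.get(alert)
        if node is None:
            continue
        if position is None and node == start:
            position = node                       # attack chain begins
        elif position is not None and node in graph[position]:
            position = node                       # attacker advanced one stage
    return position

stage = track(["PORTSCAN", "SQLI", "SUDO_ABUSE"], attack_graph, alert_to_node)
# stage == "priv_esc": three correlated alerts reveal a multi-stage attack
```

A lone alert that does not start or extend a known chain leaves the tracker empty, which is how graph matching suppresses isolated noise.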
An automated approach to fix buffer overflows (IJECEIAES)
Buffer overflows are among the most common software vulnerabilities; they occur when more data is inserted into a buffer than it can hold. Various manual and automated techniques for detecting and fixing specific types of buffer overflow vulnerability have been proposed, but no solution for fixing Unicode buffer overflows has been proposed yet. Public security vulnerability repositories, e.g. the Common Weakness Enumeration (CWE), hold useful articles about software security vulnerabilities, and the mitigation strategies listed in CWE may be useful for fixing the specified vulnerabilities. This research contributes a prototype that automatically fixes different types of buffer overflows using the strategies suggested in CWE articles and existing research. A static analysis tool was used to evaluate the performance of the developed prototype. The results suggest that the proposed approach can automatically fix buffer overflows without introducing errors.
Obfuscated computer virus detection using machine learning algorithm (journalBEEI)
Nowadays, computer virus attacks are getting very advanced. New obfuscated viruses created by virus writers generate a new variant automatically with every iteration and download. This constantly evolving malware poses a significant threat to the information security of computer users, organizations and even governments. However, the signature-based detection technique used by conventional anti-virus software on the market fails to identify such viruses because signatures are unavailable. This research proposes an alternative to the traditional signature-based detection method and investigates the use of machine learning for obfuscated virus detection. In this work, text strings extracted from virus program code are used as features to build a classifier model that can correctly classify obfuscated virus files. The text-string feature is used because it is informative and potentially requires only a small amount of memory. Results show that unknown files can be correctly classified with 99.5% accuracy using an SMO classifier model. It is therefore believed that current virus defenses can be strengthened through a machine learning approach.
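The feature-extraction step, pulling printable text strings out of a program binary, can be sketched with the standard library; the sample bytes below are invented:

```python
import re

def extract_strings(blob: bytes, min_len: int = 4):
    """Pull printable ASCII runs out of a binary, like the Unix strings tool."""
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

sample = b"\x00\x01MZ\x90kernel32.dll\x00\xffGetProcAddress\x00"
features = extract_strings(sample)
# features == ['kernel32.dll', 'GetProcAddress']
```

Each extracted string then becomes one feature for the classifier; an SMO-trained support vector machine (as in WEKA) is one choice of model over such features.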
A FRAMEWORK FOR ANALYSIS AND COMPARISON OF DYNAMIC MALWARE ANALYSIS TOOLS (IJNSA Journal)
Malware writers have employed various obfuscation and polymorphism techniques to thwart static analysis approaches and bypass antivirus tools. Dynamic analysis techniques, however, have essentially overcome these deceits by observing the actual behaviour of the code as it executes. Various methods, techniques and tools have been proposed in this regard, but because of the diverse concepts and strategies used in their implementation, security researchers and malware analysts find it difficult to select the optimum tool to investigate the behaviour of a malware sample and to contain the associated risk for their study. Focusing on two dynamic analysis techniques, function call monitoring and information flow tracking, this paper presents a comparison framework for dynamic malware analysis tools. The framework will assist researchers and analysts in recognizing a tool's implementation strategy, analysis approach, system-wide analysis support and overall handling of binaries, helping them to select a suitable and effective one for their study and analysis.
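Function call monitoring, one of the two techniques compared, can be illustrated in miniature with Python's own tracing hook. Real tools hook OS-level APIs, so this is only an analogy, and the traced function names are invented:

```python
import sys

calls = []

def tracer(frame, event, arg):
    # Record the name of every function entered, analogous to API hooking.
    if event == "call":
        calls.append(frame.f_code.co_name)
    return None  # no per-line tracing needed

def unpack(data):
    return decode(data)

def decode(data):
    return data[::-1]

sys.settrace(tracer)
unpack(b"abc")
sys.settrace(None)
# calls now holds the observed call sequence: ['unpack', 'decode']
```

The recorded sequence is the raw material a dynamic-analysis framework classifies: which calls a sample makes, and in what order.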
AN IMPROVED METHOD TO DETECT INTRUSION USING MACHINE LEARNING ALGORITHMS (ieijjournal)
An intrusion detection system detects various malicious behaviors and abnormal activities that might harm the security and trust of a computer system. An IDS operates at either host or network level, using anomaly detection or misuse detection. The main problem is to correctly detect intruder attacks against a computer network, and the key to successful intrusion detection is the choice of proper features. To resolve the problems of existing IDS schemes, this research work proposes "an improved method to detect intrusion using machine learning algorithms". We use the KDDCUP 99 dataset to analyze the efficiency of intrusion detection with different machine learning algorithms such as Bayes, NaiveBayes, J48, J48Graft and Random Forest. For network-based IDS on the KDDCUP 99 dataset, experimental results show that the three algorithms J48, J48Graft and Random Forest give much better results than the other machine learning algorithms. We use WEKA to check the accuracy of the classified dataset via our proposed method, considering all the parameters for the computation of results, i.e. precision, recall, F-measure and ROC.
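For reference, the reported parameters are computed from the confusion counts of a class; a minimal sketch with invented counts:

```python
def metrics(tp, fp, fn):
    """Precision, recall and F-measure as reported by WEKA-style evaluations."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical confusion counts for the 'attack' class.
p, r, f = metrics(tp=90, fp=10, fn=30)
# p = 0.9, r = 0.75, f = 2*0.9*0.75 / (0.9+0.75) ≈ 0.818
```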
Information systems and networks are subject to electronic attacks. When network attacks hit, organizations are thrown into crisis mode: from the IT department to call centers, to the board room and beyond, all are fraught with danger until the situation is under control. Traditional methods used to counter these threats (e.g. firewalls, antivirus software, password protection) do not provide complete security for the system. This encourages researchers to develop an Intrusion Detection System capable of detecting and responding to such events. This review paper presents a comprehensive study of Genetic Algorithm (GA) based Intrusion Detection Systems (IDS). It provides a brief overview of rule-based IDS, elaborates the implementation issues of the Genetic Algorithm, and presents a comparative analysis of existing studies.
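A toy version of the GA-based approach: a rule is a threshold value, fitness is detection accuracy on labeled records, and selection plus mutation evolves the rule. The data, features and parameters are invented for illustration; real GA-based IDS encode multi-field rules as chromosomes and add crossover, but the selection-mutation loop has the same shape:

```python
import random

rng = random.Random(42)

# Toy labeled records: (duration, failed_logins, label), with label 1 = intrusion.
DATA = [(1, 0, 0), (2, 0, 0), (1, 5, 1), (3, 6, 1), (2, 4, 1), (1, 1, 0)]

def fitness(threshold):
    """Accuracy of the rule 'failed_logins > threshold => intrusion'."""
    return sum((fl > threshold) == bool(y) for _, fl, y in DATA) / len(DATA)

def evolve(generations=30, pop_size=8):
    pop = [rng.randint(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep fittest rules
        # Offspring: mutate copies of the fittest rules by +/-1.
        pop = parents + [max(0, p + rng.choice([-1, 1])) for p in parents]
    return max(pop, key=fitness)

best = evolve()
# Any threshold in 1..3 classifies all six records correctly (fitness 1.0).
```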
PRACTICAL APPROACH FOR SECURING WINDOWS ENVIRONMENT: ATTACK VECTORS AND COUNT... (IJNSA Journal)
Today, with the advancement of information technology, companies need to use many technologies, platforms, systems and applications to maintain their daily operations effectively. This technology dependence has created serious complexity in the business network, which increases the attack surface and attracts cyber criminals' attention. As a result, the number of cyber-attacks targeting corporate environments has increased dramatically. To identify security holes in a network, penetration tests are performed by internal sources (employees) and external sources (outsourced companies or third parties). Microsoft domain penetration testing is one of the most important scopes of penetration testing; it aims to expose weaknesses in the Microsoft domain environment. If the domain environment is not structured securely, it can be abused by attackers and cause serious damage to the organization. In this study, we present a penetration methodology for the Windows domain environment, called MSDEPTM, providing key metrics for Microsoft domain penetration testing. More specifically, the fundamental steps of the attack vectors from the hacker's point of view, the root causes of these attacks, and countermeasures against the attacks are discussed.
EFFECTIVENESS AND WEAKNESS OF QUANTIFIED/AUTOMATED ANOMALY BASED IDS (IJNSA Journal)
We discuss new problems in the quantification and automation of anomaly-based Intrusion Detection Systems (IDS). We analyze their effectiveness and weaknesses using our proposed method as an example, and derive a new attack scenario. The development of anomaly-based IDS is necessary to respond to advanced network attacks; however, we show that it creates new and different problems at the same time. In this paper, we discuss attack scenarios that invalidate our detection. As a result, we conclude that it is technically difficult to prevent such attacks, and that the security requirements on the operations side become serious.
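The kind of quantification being discussed can be illustrated with the simplest anomaly score, a z-score against a learned traffic baseline (all numbers invented). The weakness follows directly: an attacker who keeps the score under the threshold is never flagged:

```python
import statistics

def anomaly_scores(baseline, observed):
    """Z-scores against a learned baseline; values past a threshold raise alerts."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [(x - mu) / sigma for x in observed]

# Requests per minute: six normal samples, then normal traffic and a burst.
baseline = [100, 104, 98, 102, 96, 100]
scores = anomaly_scores(baseline, [101, 250])
alerts = [s > 3 for s in scores]   # classic 3-sigma rule
# alerts == [False, True]; a patient attacker staying below 3 sigma evades it
```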
Basic survey on malware analysis, tools and techniques (ijcsa)
The term malware stands for malicious software: a program installed on a system without the knowledge of the system's owner. It is typically installed by a third party with the intention of stealing private data from the system, or simply to play pranks. This threatens the security of computers that people rely on in day-to-day life for necessities such as education, communication, hospitals, banking and entertainment. Various traditional techniques, such as antivirus scanners (AVS) and firewalls, are used to detect and defend against malware, but today malware writers are one step ahead of malware detectors: day by day they write new malware, which poses a great challenge for detectors. This paper focuses on a basic study of malware and the various detection techniques that can be used to detect it.
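The traditional signature approach mentioned above amounts to substring matching. The sketch below uses invented patterns (not real signatures) and shows why a one-byte change defeats it:

```python
# Minimal signature scan: the approach AVS tools use and obfuscation defeats.
SIGNATURES = {
    "eicar_like": b"X5O!P%@AP",       # illustrative byte pattern, not a real signature
    "dropper_stub": b"\x90\x90\xeb\xfe",
}

def scan(payload: bytes):
    """Return the names of all signatures found in the payload."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]

hits = scan(b"junk...X5O!P%@AP...more")
# hits == ['eicar_like']; a single-byte change to the pattern evades the scan:
assert scan(b"junk...X5O!P%@Aq...more") == []
```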
COMPARISON OF MALWARE CLASSIFICATION METHODS USING CONVOLUTIONAL NEURAL NETWO... (IJNSA Journal)
Malicious software is constantly being developed and improved, so the detection and classification of malware is an ever-evolving problem. Since traditional malware detection techniques fail to detect new or unknown malware, machine learning algorithms have been used to overcome this disadvantage. We present a Convolutional Neural Network (CNN) for malware type classification based on API (Application Program Interface) calls. This research uses a database of 7107 instances of API call streams and 8 different malware types: Adware, Backdoor, Downloader, Dropper, Spyware, Trojan, Virus, Worm. We used a 1-dimensional CNN, mapping API calls as categorical and term frequency-inverse document frequency (TF-IDF) vectors, and compared the results to other classification techniques. The proposed 1-D CNN outperformed the other classification techniques with 91% overall accuracy for both categorical and TF-IDF vectors.
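The TF-IDF vectorization of API call streams can be sketched with the standard library. The API names are invented and the unsmoothed IDF variant is one of several; the paper's exact weighting may differ:

```python
import math
from collections import Counter

# Hypothetical API call streams for three samples.
docs = [
    ["CreateFile", "WriteFile", "CloseHandle"],
    ["RegOpenKey", "RegSetValue", "CloseHandle"],
    ["CreateFile", "CreateFile", "VirtualAlloc"],
]

def tf_idf(docs):
    """Raw-count TF times IDF; one sparse vector (dict) per API stream."""
    n = len(docs)
    # Document frequency: in how many streams each API call appears.
    df = Counter(call for doc in docs for call in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({c: tf[c] * math.log(n / df[c]) for c in tf})
    return vectors

vecs = tf_idf(docs)
# 'CloseHandle' appears in 2 of 3 streams, so its weight is low;
# 'VirtualAlloc' appears in only 1, so it is weighted highly.
```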
Integrated Feature Extraction Approach Towards Detection of Polymorphic Malwa... (CSCJournals)
Some malware is sophisticated, employing polymorphic techniques such as self-mutation and evasion of emulation-based analysis. Most anti-malware techniques are overwhelmed by polymorphic malware threats that self-mutate into different variants at every attack. This research aims to contribute to the detection of malicious code, especially polymorphic malware, by utilizing advanced static and advanced dynamic analyses to extract more informative key features of a malware sample through code analysis, memory analysis and behavioral analysis. A correlation-based feature selection algorithm will be used to transform the features, i.e. to filter and select optimal and relevant ones. A machine learning technique called K-Nearest Neighbor (K-NN) will be used for the classification and detection of polymorphic malware. Evaluation of the results will be based on the following measurement metrics: True Positive Rate (TPR), False Positive Rate (FPR) and the overall detection accuracy of the experiments.
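The K-NN classification stage can be sketched as follows; the two features (entropy, API-call count) and all values are invented for illustration:

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Classify feature vector x by majority vote of its k nearest neighbours."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (entropy, api_call_count) feature vectors with labels.
train = [((7.9, 120), "malware"), ((7.6, 90), "malware"),
         ((4.1, 12), "benign"), ((3.8, 9), "benign"), ((7.2, 70), "malware")]
label = knn_predict(train, (7.5, 100))
# label == 'malware': the three nearest training samples are all malware
```

Because K-NN compares whole feature vectors rather than exact signatures, a mutated variant that keeps similar behavioural features still lands near its family.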
Improvement of Software Maintenance and Reliability using Data Mining Techniques (ijdmtaiir)
Software is ubiquitous in our daily life. It brings us great convenience, and a big headache about software reliability as well: software is never bug-free, and software bugs keep incurring monetary loss and even catastrophes. In the pursuit of better reliability, software engineering researchers have found that huge amounts of data in various forms can be collected from software systems, and that these data, when properly analyzed, can help improve software reliability. Unfortunately, the huge volume of complex data renders simple analysis techniques incompetent; consequently, studies have been resorting to data mining for more effective analysis. In the past few years, we have witnessed many studies on mining for software reliability reported in data mining as well as software engineering forums. These studies either develop new or apply existing data mining techniques to tackle reliability problems from different angles. In order to keep data mining researchers abreast of the latest developments in this growing research area, we propose this paper on data mining for software reliability. We present a comprehensive overview of this area, examine representative studies, and lay out challenges to data mining researchers.
Malware Detection Module using Machine Learning Algorithms to Assist in Centr... (IJNSA Journal)
Malicious software is abundant in a world of innumerable computer users, who are constantly faced with threats from various sources such as the internet, local networks and portable drives. Malware ranges from low to high risk and can cause systems to function incorrectly, steal data and even crash. Malware may be executable or system library files in the form of viruses, worms or Trojans, all aimed at breaching the security of the system and compromising user privacy. Typically, anti-virus software is based on a signature-definition system which keeps updating from the internet and thus keeps track of known viruses. While this may be sufficient for home users, a security risk from a new virus could threaten an entire enterprise network. This paper proposes a new and more sophisticated antivirus engine that can not only scan files but also build knowledge and detect files as potential viruses. This is done by extracting the system API calls made by various normal and harmful executables, and using machine learning algorithms to classify and hence rank files on a scale of security risk. While such a system is processor-heavy, it is very effective when used centrally to protect an enterprise network, which may be more prone to such threats.
Network Threat Characterization in Multiple Intrusion Perspectives using Data... (IJNSA Journal)
For effective security incident response on a network, a reputable approach must be in place in both the protected and unprotected regions of the network, because a compromise in the demilitarized zone could be a precursor to a threat inside the network. The increased complexity of present-day attacks and the vulnerability of systems are the motivations for this work. Past and present approaches to intrusion detection and prevention have neglected victim and attacker properties, despite the fact that for an intrusion to occur, an overt act by an attacker and a manifestation observable by the intended victim, resulting from that act, are both required. Therefore, this paper presents a threat characterization model for attacks from the victim and attacker perspectives of intrusion using a data mining technique that combines Frequent Temporal Sequence Association Mining and Fuzzy Logic. The Apriori association mining algorithm was used to mine temporal rule patterns from alert sequences, while a Fuzzy Control System was used to rate exploits. The results of the experiment show that accurate threat characterization in multiple intrusion perspectives can be achieved using Fuzzy Association Mining. The results also proved that sequences of exploits can be used to rate threats, and that these are motivated by victim properties and attacker objectives.
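A compact sketch of the Apriori step on alert "transactions" (signature names and time windows invented): itemsets whose support falls below the threshold are pruned before larger candidates are generated.

```python
from itertools import combinations

# Hypothetical alert transactions: signatures seen in the same time window.
transactions = [
    {"scan", "brute_force", "root_shell"},
    {"scan", "brute_force"},
    {"scan", "root_shell"},
    {"brute_force", "root_shell"},
]

def apriori(transactions, min_support=0.5):
    """Return frequent itemsets (as frozensets) mapped to their support."""
    items = {i for t in transactions for i in t}
    n = len(transactions)
    frequent, size = {}, 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        level = {c: v / n for c, v in counts.items() if v / n >= min_support}
        frequent.update(level)
        size += 1
        # Candidate generation: unions of frequent sets from the previous level.
        candidates = list({a | b for a, b in combinations(level, 2)
                           if len(a | b) == size})
    return frequent

freq = apriori(transactions)
# Singletons have support 0.75; each pair has support 0.5; the triple is pruned.
```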
A source code security audit is a powerful methodology for locating and removing security vulnerabilities. An audit can be used to (1) pass a potentially prioritized list of vulnerabilities to developers, (2) exploit vulnerabilities, or (3) provide proofs of concept for potential vulnerabilities. Security audit research currently remains disjoint, with little discussion of the methodologies used in the field. This paper assembles a broad array of literature to promote the standardization of source code security audit techniques, and then explores a case study using the aforementioned techniques.

The case study analyzes the security of a stable version of the Apache Traffic Server (ATS). The study takes a white- to gray-hat point of view as it reports vulnerabilities located by two popular proprietary tools, examines and connects potential vulnerabilities with a standard community-driven taxonomy, and describes the consequences of exploiting the vulnerabilities. A review of other security-driven case studies concludes this research.
A FRAMEWORK FOR ANALYSIS AND COMPARISON OF DYNAMIC MALWARE ANALYSIS TOOLSIJNSA Journal
Malware writers have employed various obfuscation and polymorphism techniques to thwart static analysis
approaches and bypassing antivirus tools. Dynamic analysis techniques, however, have essentially
overcome these deceits by observing the actual behaviour of the code execution. In this regard, various
methods, techniques and tools have been proposed. However, because of the diverse concepts and
strategies used in the implementation of these methods and tools, security researchers and malware
analysts find it difficult to select the required optimum tool to investigate the behaviour of a malware and to
contain the associated risk for their study. Focusing on two dynamic analysis techniques: Function Call
monitoring and Information Flow Tracking, this paper presents a comparison framework for dynamic
malware analysis tools. The framework will assist the researchers and analysts to recognize the tool’s
implementation strategy, analysis approach, system-wide analysis support and its overall handling of
binaries, helping them to select a suitable and effective one for their study and analysis.
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
AN IMPROVED METHOD TO DETECT INTRUSION USING MACHINE LEARNING ALGORITHMSieijjournal
An intrusion detection system detects various malicious behaviors and abnormal activities that might harm
security and trust of computer system. IDS operate either on host or network level via utilizing anomaly
detection or misuse detection. Main problem is to correctly detect intruder attack against computer
network. The key point of successful detection of intrusion is choice of proper features. To resolve the
problems of IDS scheme this research work propose “an improved method to detect intrusion using
machine learning algorithms”. In our paper we use KDDCUP 99 dataset to analyze efficiency of intrusion
detection with different machine learning algorithms like Bayes, NaiveBayes, J48, J48Graft and Random
forest. To identify network based IDS with KDDCUP 99 dataset, experimental results shows that the three
algorithms J48, J48Graft and Random forest gives much better results than other machine learning
algorithms. We use WEKA to check the accuracy of classified dataset via our proposed method. We have
considered all the parameter for computation of result i.e. precision, recall, F – measure and ROC.
Information Systems and Networks are subjected to electronic attacks. When
network attacks hit, organizations are thrown into crisis mode. From the IT department to
call centers, to the board room and beyond, all are fraught with danger until the situation is
under control. Traditional methods which are used to overcome these threats (e.g. firewall,
antivirus software, password protection etc.) do not provide complete security to the system.
This encourages the researchers to develop an Intrusion Detection System which is capable
of detecting and responding to such events. This review paper presents a comprehensive
study of Genetic Algorithm (GA) based Intrusion Detection System (IDS). It provides a
brief overview of rule-based IDS, elaborates the implementation issues of Genetic Algorithm
and also presents a comparative analysis of existing studies.
PRACTICAL APPROACH FOR SECURING WINDOWS ENVIRONMENT: ATTACK VECTORS AND COUNT...IJNSA Journal
Today, with the advancement of information technology, companies need to use many technologies, platforms, systems and applications to effectively maintain their daily operations. This technology dependence has created a serious complexity in the business network which increases the attack surface and attracts cyber criminal’s attention. As a result, the number of cyber-attacks targeting corporate environment is dramatically increased. To identify security holes in a network, penetration tests are performed by internal sources (employees) and external sources (outsource companies or third parties). Microsoft domain penetration testing,is one of the most important scopes of penetration testing, which aims to expose the weaknesses in Microsoft domain environment. If the domain environment is not structured securely, it can be abused by attackers and causes serious damage to the organization. In this study, we present a penetration methodology for Windows domain environment called MSDEPTM providing key metrics for Microsoft domain penetration testing. More specifically, the fundamental steps of the attack vectors from the hacker point of view, root causes of these attacks, and countermeasures against the attacks are discussed.
EFFECTIVENESS AND WEAKNESS OF QUANTIFIED/AUTOMATED ANOMALY BASED IDSIJNSA Journal
We shall discuss new problems of quantification/automation of anomaly-based Intrusion Detection System(IDS). We shall analyze effectiveness and weakness using our proposal method as an example, and derive new attack scenario. Development of anomaly-based IDS is necessary for correspondence to a high network attack, however, we shall show that it makes new different problems at the same time. In this paper, we shall discuss some attack scenario which makes invalidate our detection. As the result, we conclude that it is difficult to prevent such attacks technically, and security requirements for operation side become serious
Basic survey on malware analysis, tools and techniquesijcsa
The term malware stands for malicious software. It is a program installed on a system without the
knowledge of owner of the system. It is basically installed by the third party with the intention to steal some
private data from the system or simply just to play pranks. This in turn threatens the computer’s security,
wherein computer are used by one’s in day-to-day life as to deal with various necessities like education,
communication, hospitals, banking, entertainment etc. Different traditional techniques are used to detect
and defend these malwares like Antivirus Scanner (AVS), firewalls, etc. But today malware writers are one
step forward towards then Malware detectors. Day-by-day they write new malwares, which become a great
challenge for malware detectors. This paper focuses on basis study of malwares and various detection
techniques which can be used to detect malwares.
COMPARISON OF MALWARE CLASSIFICATION METHODS USING CONVOLUTIONAL NEURAL NETWO...IJNSA Journal
Malicious software is constantly being developed and improved, so detection and classification of malwareis an ever-evolving problem. Since traditional malware detection techniques fail to detect new/unknown malware, machine learning algorithms have been used to overcome this disadvantage. We present a Convolutional Neural Network (CNN) for malware type classification based on the API (Application Program Interface) calls. This research uses a database of 7107 instances of API call streams and 8 different malware types:Adware, Backdoor, Downloader, Dropper, Spyware, Trojan, Virus,Worm. We used a 1-Dimensional CNN by mapping API calls as categorical and term frequency-inverse document frequency (TF-IDF) vectors and compared the results to other classification techniques.The proposed 1-D CNN outperformed other classification techniques with 91% overall accuracy for both categorical and TF-IDF vectors.
Integrated Feature Extraction Approach Towards Detection of Polymorphic Malwa... (CSCJournals)
Some malware is sophisticated, employing polymorphic techniques such as self-mutation and evasion of emulation-based analysis. Most anti-malware techniques are overwhelmed by polymorphic malware threats that self-mutate into different variants at every attack. This research aims to contribute to the detection of malicious code, especially polymorphic malware, by utilizing advanced static and advanced dynamic analyses to extract more informative key features of a malware sample through code analysis, memory analysis and behavioral analysis. A correlation-based feature selection algorithm will be used to transform features, i.e. filtering and selecting optimal and relevant features. A machine learning technique called K-Nearest Neighbor (K-NN) will be used for classification and detection of polymorphic malware. Evaluation of results will be based on the following measurement metrics: True Positive Rate (TPR), False Positive Rate (FPR) and the overall detection accuracy of the experiments.
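A minimal K-NN classifier of the kind the abstract describes can be sketched as follows; the two-dimensional feature vectors and labels are toy values, not the study's real extracted features.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a feature vector by majority vote among its k nearest
    neighbours under Euclidean distance (basic K-NN)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy (feature vector, label) pairs standing in for extracted malware features:
train = [
    ([0.1, 0.2], "benign"),  ([0.2, 0.1], "benign"),
    ([0.9, 0.8], "malware"), ([0.8, 0.9], "malware"),
]
print(knn_classify(train, [0.85, 0.85]))  # malware
```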
Improvement of Software Maintenance and Reliability using Data Mining Techniques (ijdmtaiir)
Software is ubiquitous in our daily life. It brings us great convenience, and a big headache about software reliability as well: software is never bug-free, and software bugs keep incurring monetary loss or even catastrophes. In the pursuit of better reliability, software engineering researchers found that huge amounts of data in various forms can be collected from software systems, and these data, when properly analyzed, can help improve software reliability. Unfortunately, the huge volume of complex data renders simple analysis techniques ineffective; consequently, studies have been resorting to data mining for more effective analysis. In the past few years, we have witnessed many studies on mining for software reliability reported in data mining as well as software engineering forums. These studies either develop new data mining techniques or apply existing ones to tackle reliability problems from different angles. In order to keep data mining researchers abreast of the latest developments in this growing research area, we propose this paper on data mining for software reliability. We present a comprehensive overview of the area, examine representative studies, and lay out challenges to data mining researchers.
Malware Detection Module using Machine Learning Algorithms to Assist in Centr... (IJNSA Journal)
Malicious software is abundant in a world of innumerable computer users, who are constantly faced with these threats from various sources like the internet, local networks and portable drives. Malware ranges from low to high risk and can cause systems to function incorrectly, steal data and even crash. Malware may be executable or system library files in the form of viruses, worms and Trojans, all aimed at breaching the security of the system and compromising user privacy. Typically, anti-virus software is based on a signature definition system which keeps updating from the internet, thus keeping track of known viruses. While this may be sufficient for home users, a security risk from a new virus could threaten an entire enterprise network. This paper proposes a new and more sophisticated antivirus engine that can not only scan files, but also build knowledge and detect files as potential viruses. This is done by extracting the system API calls made by various normal and harmful executables, and using machine learning algorithms to classify and hence rank files on a scale of security risk. While such a system is processor heavy, it is very effective when used centrally to protect an enterprise network which may be more prone to such threats.
Network Threat Characterization in Multiple Intrusion Perspectives using Data... (IJNSA Journal)
For effective security incident response on the network, a reputable approach must be in place in both the protected and unprotected regions of the network, because a compromise in the demilitarized zone could be a precursor to a threat inside the network. The increased complexity of attacks in present times and the vulnerability of systems are the motivations for this work. Past and present approaches to intrusion detection and prevention have neglected victim and attacker properties, despite the fact that for an intrusion to occur, an overt act by an attacker and a manifestation observable by the intended victim, resulting from that act, are required. Therefore, this paper presents a threat characterization model for attacks from the victim and attacker perspectives of intrusion using a data mining technique that combines Frequent Temporal Sequence Association Mining and Fuzzy Logic. The Apriori Association Mining algorithm was used to mine temporal rule patterns from alert sequences, while a Fuzzy Control System was used to rate exploits. The results of the experiment show that accurate threat characterization in multiple intrusion perspectives can be actualized using Fuzzy Association Mining. The results also prove that sequences of exploits can be used to rate threats, and that they are motivated by victim properties and attacker objectives.
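A bare-bones version of the Apriori frequent-itemset step mentioned above might look like this in Python; the alert names and support threshold are illustrative assumptions, and the actual system mines temporal rules on top of such itemsets.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every itemset whose support (number of transactions that
    contain it) is at least min_support, level by level as in Apriori."""
    transactions = [frozenset(t) for t in transactions]
    level = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Join surviving k-itemsets into (k+1)-item candidates.
        keys = list(survivors)
        level = {a | b for a, b in combinations(keys, 2)
                 if len(a | b) == len(a) + 1}
    return frequent

# Hypothetical alert sequences (names invented for the example):
alerts = [{"scan", "bruteforce"}, {"scan", "bruteforce", "dos"}, {"scan", "dos"}]
freq = apriori(alerts, min_support=2)
```

With `min_support=2`, the pair {scan, bruteforce} survives while {bruteforce, dos} (seen only once) is pruned.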
A source code security audit is a powerful methodology for locating and removing security vulnerabilities. An audit can be used to (1) pass a potentially prioritized list of vulnerabilities to developers, (2) exploit vulnerabilities, or (3) provide proofs-of-concept for potential vulnerabilities. Security audit research currently remains disjoint, with little discussion of the methodologies used in the field. This paper assembles a broad array of literature to promote standardizing source code security audit techniques, and then explores a case study using the aforementioned techniques.
The case study analyzes the security of a stable version of the Apache Traffic Server (ATS). The study takes a white- to gray-hat point of view as it reports vulnerabilities located by two popular proprietary tools, examines and connects potential vulnerabilities with a standard community-driven taxonomy, and describes the consequences of exploiting the vulnerabilities. A review of other security-driven case studies concludes this research.
Protecting Enterprise - An examination of bugs, major vulnerabilities and exp... (ESET Middle East)
This white paper focuses on the dramatic growth in the number and severity of software vulnerabilities, and discusses how multilayered endpoint security is needed to mitigate the threats they pose.
A Security Analysis Framework Powered by an Expert System (CSCJournals)
Today's IT systems are facing a major challenge in confronting the fast rate of emerging security threats. Although many security tools are employed within organizations to stand up to these threats, the information they reveal falls short of providing a rich understanding of the consequences of the discovered vulnerabilities. We believe expert systems can play an important role in capturing security expertise from various sources in order to provide the informative deductions we are looking for from the supplied inputs. Throughout this research effort, we have built the Open Security Knowledge Engineered (OpenSKE) framework (http://code.google.com/p/openske), a security analysis framework built around an expert system that reasons over security information collected from external sources. Our implementation has been published online to facilitate and encourage online collaboration and increase practical research within the field of security analysis.
SOURCE CODE ANALYSIS TO REMOVE SECURITY VULNERABILITIES IN JAVA SOCKET PROGRA... (IJNSA Journal)
This paper presents the source code analysis of a file reader server socket program (connection-oriented sockets) developed in Java, to illustrate the identification, impact analysis and solutions to remove five important software security vulnerabilities, which if left unattended could severely impact the server running the software and also the network hosting the server. The five vulnerabilities we study in this paper are: (1) Resource Injection, (2) Path Manipulation, (3) System Information Leak, (4) Denial of Service and (5) Unreleased Resource vulnerabilities. We analyze the reason why each of these vulnerabilities occur in the file reader server socket program, discuss the impact of leaving them unattended in the program, and propose solutions to remove each of these vulnerabilities from the program. We also analyze any potential performance tradeoffs (such as increase in code size and loss of features) that could arise while incorporating the proposed solutions on the server program. The proposed solutions are very generic in nature, and can be suitably modified to correct any such vulnerabilities in software developed in any other programming language. We use the Fortify Source Code Analyzer to conduct the source code analysis of the file reader server program, implemented on a Windows XP virtual machine with the standard J2SE v.7 development kit.
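As one illustration of the Path Manipulation class listed above (the paper's programs are in Java; this sketch uses Python for brevity), a file server can canonicalize the requested path and reject anything that escapes its base directory. The directory and file names are made up for the example.

```python
from pathlib import Path

def safe_resolve(base_dir: str, requested: str) -> Path:
    """Canonicalize a requested file path and refuse anything that
    escapes base_dir (one common guard against Path Manipulation)."""
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    if not target.is_relative_to(base):      # Python 3.9+
        raise ValueError(f"path escapes {base}: {requested!r}")
    return target

print(safe_resolve("/var/www/files", "report.pdf").name)  # report.pdf
```

The Java analogue compares the canonical path of the requested file against the canonical path of the base directory before opening it.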
Secure intrusion detection and countermeasure selection in virtual system usi... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal are to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Obfuscated computer virus detection using machine learning algorithm (journalBEEI)
Nowadays, computer virus attacks are getting very advanced. New obfuscated computer viruses created by virus writers generate a new shape of the virus automatically for every single iteration and download. These constantly evolving computer viruses have posed a significant threat to the information security of computer users, organizations and even governments. However, the signature-based detection technique used by conventional anti-virus software in the market fails to identify them, as signatures are unavailable. This research proposed an alternative to the traditional signature-based detection method and investigated the use of machine learning for obfuscated computer virus detection. In this work, text strings extracted from virus program codes are used as the features to generate a suitable classifier model that can correctly classify obfuscated virus files. The text string feature is used because it is informative and potentially uses only a small amount of memory space. Results show that unknown files can be correctly classified with 99.5% accuracy using the SMO classifier model. Thus, it is believed that current computer virus defense can be strengthened through a machine learning approach.
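Extracting printable text strings from a binary, which the feature-extraction step above relies on, can be sketched along the lines of the Unix `strings` utility; the byte blob below is a made-up example.

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Collect printable-ASCII runs of at least min_len bytes, much like
    the Unix `strings` utility; such runs become text-string features."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Made-up binary blob mixing non-printable bytes with embedded text:
blob = b"\x00\x01GetProcAddress\x00\xffhttp://evil.example\x00ab\x00"
print(extract_strings(blob))  # ['GetProcAddress', 'http://evil.example']
```

Runs shorter than `min_len` (like the stray "ab") are discarded, which keeps the feature set small and informative.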
Cyber warfare is the single greatest emerging threat to national security. Network security has become an essential component of any computer network. As computer networks and systems become ever more fundamental to modern society, concerns about security have become increasingly important. A multitude of different applications, open source and proprietary, are available for protection; for a system administrator, deciding on the most suitable option requires knowledge of the available safety measures, their features, how they affect the quality of service, and the kind of data they will allow through unflagged. A majority of the methods currently used to ensure the quality of a network's service are signature based. From this information, and from details on the specifics of popular applications and their implementation methods, we have carried through the ideas, incorporating our own opinions, to formulate suggestions on how this could be done at a general level. The main objective was to design and develop an Intrusion Detection System; the minor objectives were to design a port scanner to determine potential threats and mitigation techniques to withstand these attacks, implement the system on a host, and run and test the designed IDS. In this project we set out to develop a honeypot IDS system that makes it easy to listen on a range of ports and emulate a network protocol to track and identify any individuals trying to connect to the system. This IDS uses the following design approaches: event correlation, log analysis, alerting, and policy enforcement. Intrusion Detection Systems (IDSs) attempt to identify unauthorized use, misuse, and abuse of computer systems. In response to the growth in the use and development of IDSs, we have developed a methodology for testing IDSs. The methodology consists of techniques from the field of software testing which we have adapted for the specific purpose of testing IDSs.
In this paper, we identify a set of general IDS performance objectives which is the basis for the methodology. We present the details of the methodology, including strategies for test-case selection and specific testing procedures. We include quantitative results from testing experiments on the Network Security Monitor (NSM), an IDS developed at UC Davis. We present an overview of the software platform that we have used to create user-simulation scripts for testing experiments. The platform consists of the UNIX tool expect and enhancements that we have developed, including mechanisms for concurrent scripts and a record-and-replay feature. We also provide background information on intrusions and IDSs to motivate our work.
Vulnerability scanners: a proactive approach to assess web application security (ijcsa)
With the increasing concern for security in the network, many approaches have been laid out that try to protect the network from unauthorised access. New methods have been adopted in order to find the potential discrepancies that may damage the network. The most commonly used approach is vulnerability assessment. By vulnerability, we mean the potential flaws in the system that make it prone to attack. Assessment of these system vulnerabilities provides a means to identify and develop new strategies to protect the system from the risk of being damaged. This paper focuses on the usage of various vulnerability scanners and their related methodology to detect the vulnerabilities present in web applications or remote hosts across the network, and tries to identify new mechanisms that can be deployed to secure the network.
NETWORK INTRUSION DETECTION AND COUNTERMEASURE SELECTION IN VIRTUAL NETWORK (... (ijsptm)
Intrusion into a network or a system is a problem today, as the trend of successful network attacks continues to rise. Intruders can exploit vulnerabilities of a network system to gain access and deploy viruses or malware, or to mount attacks such as Denial of Service (DoS). In this work, a frequency-based Intrusion Detection System (IDS) is proposed to detect DoS attacks. The frequency data is extracted from the time-series data created by the traffic flow using the Discrete Fourier Transform (DFT). An algorithm is developed for anomaly-based intrusion detection with fewer false alarms, which further detects known and unknown attack signatures in a network. The frequency of the traffic data of the virus or malware will be inconsistent with the frequency of the legitimate traffic data. A Centralized Traffic Analyzer Intrusion Detection System, called CTA-IDS, is introduced to further detect inside attackers in a network. The strategy is effective in detecting abnormal content in the traffic data as information passes from one node to another, and it also detects known and unknown attack signatures. This approach is tested by running artificial network intrusion data in simulated networks using the Network Simulator 2 (NS2) software.
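The DFT step this abstract describes, turning a traffic time series into a frequency spectrum where a periodic flood stands out, can be sketched directly; the packets-per-interval series below is synthetic, not from the paper.

```python
import cmath
import math

def dft_magnitudes(samples):
    """Magnitude spectrum of a time series via a direct (O(n^2)) DFT."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Synthetic packets-per-interval series: steady traffic plus a periodic
# flood at 4 cycles over the 32-interval window.
series = [10 + 8 * math.sin(2 * math.pi * 4 * t / 32) for t in range(32)]
mags = dft_magnitudes(series)
peak = max(range(1, len(series) // 2), key=lambda k: mags[k])  # skip DC bin
print(peak)  # 4
```

The periodic component shows up as a sharp peak at bin 4, which is the kind of frequency signature that legitimate, non-periodic traffic would not produce.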
Similar to A Smart Fuzzing Approach for Integer Overflow Detection (20)
Securing Cloud Computing Through IT Governance (ITIIIndustries)
Lack of alignment between information technology (IT) and the business is a problem facing many organizations. Most organizations, today, fundamentally depend on IT. When IT and the business are aligned in an organization, IT delivers what the business needs and the business is able to deliver what the market needs. IT has become a strategic function for most organizations, and it is imperative that IT and business are aligned. IT governance is one of the most powerful ways to achieve IT to business alignment. Furthermore, as the use of cloud computing for delivering IT functions becomes pervasive, organizations using cloud computing must effectively apply IT governance to it. While cloud computing presents tremendous opportunities, it comes with risks as well. Information security is one of the top risks in cloud computing. Thus, IT governance must be applied to cloud computing information security to help manage the risks associated with cloud computing information security. This study advances knowledge by extending IT governance to cloud computing and information security governance.
Information Technology in Industry (ITII) - November Issue 2018 (ITIIIndustries)
IT Industry publishes original research articles, review articles, and extended versions of conference papers. Articles resulting from research of a theoretical and/or practical nature performed by academics and/or industry practitioners are welcome. IT in Industry aims to become a leading IT journal with a high impact factor.
Design of an IT Capstone Subject - Cloud Robotics (ITIIIndustries)
This paper describes the curriculum of the three year IT undergraduate program at La Trobe University, and the faculty requirements in designing a capstone subject, followed by the ACM’s recommended IT curriculum covering the five pillars of the IT discipline. Cloud robotics, a broad multidisciplinary research area, requiring expertise in all five pillars with mechatronics, is an ideal candidate to offer capstone experiences to IT students. Therefore, in this paper, we propose a long term master project in developing a cloud robotics testbed, with many capstone sub-projects spanning across the five IT pillars, to meet the objectives of capstone experience. This paper also describes the design and implementation of the testbed, and proposes potential capstone projects for students with different interests.
Dimensionality Reduction and Feature Selection Methods for Script Identificat... (ITIIIndustries)
The goal of this research is to explore the effects of dimensionality reduction and feature selection on the problem of script identification from images of printed documents. The k-adjacent segment is ideal for this use due to its ability to capture visual patterns. We have used principal component analysis to reduce our feature matrix to a handier size that can be trained easily, and experimented by including varying combinations of dimensions of the super feature set. A modular approach in neural networks was used to classify 7 languages: Arabic, Chinese, English, Japanese, Tamil, Thai and Korean.
Image Matting via LLE/iLLE Manifold Learning (ITIIIndustries)
Accurately extracting foreground objects, the problem of isolating the foreground in images and video, is called image matting and has wide applications in digital photography. The problem is severely ill-posed in the sense that, at each pixel, one must estimate the foreground and background pixels and the so-called alpha value from pixel information alone. The most recent work in natural image matting relies on local smoothness assumptions about foreground and background colours, on which a cost function is established. In this paper, we propose an extension to the class of affinity-based matting techniques that incorporates local manifold structural information to produce a smoother matte based on the so-called improved Locally Linear Embedding. We illustrate our new algorithm using the standard benchmark images, and very comparable results have been obtained.
Annotating Retina Fundus Images for Teaching and Learning Diabetic Retinopath... (ITIIIndustries)
With improvements in the IT industry, more and more computer software is being introduced in teaching and learning. In this paper, we discuss the development process of such software. Diabetic Retinopathy is a common complication for diabetic patients. It may cause sight loss if not treated early. There are several stages of this disease, and fundus imagery is required to identify the stage and severity. Due to the lack of a proper dataset of fundus images with proper annotation, it is very difficult to perform research on this topic. Moreover, medical students often face difficulty identifying the disease in the later stages of their practice, as they may not have seen samples of all of the stages of Diabetic Retinopathy. To mitigate the problem, we have collected fundus images from different geographic areas of Bangladesh and designed annotation software to store information about the patient, the infection level and their locations in the images. Since it can be difficult to select all appropriate pixels of an infected region, we have introduced a K-nearest-neighbor (KNN) based technique to accurately select the region of interest (ROI). Once an expert (ophthalmologist) has annotated the images, the software can be used by students for learning.
A Framework for Traffic Planning and Forecasting using Micro-Simulation Calib... (ITIIIndustries)
This paper presents the application of microsimulation for traffic planning and forecasting, and proposes a new framework to model complex traffic conditions by calibrating and adjusting traffic parameters of a microsimulation model. By using an open source micro-simulator package, TRANSIMS, in this study, animated and numerical results were produced and analysed. The framework of traffic model calibration was evaluated for its usefulness and practicality. Finally, we discuss future applications such as providing end users with real time traffic information through Intelligent Transport System (ITS) integration.
Investigating Tertiary Students’ Perceptions on Internet Security (ITIIIndustries)
Internet security threats have grown from simple viruses to various forms of computer hacking, scams, impersonation, cyber bullying, and spyware. The Internet has great influence on most people, and one can spend endless hours on internet activities. In particular, youth engage in more online activities than any other age group. Excessive internet usage is an emerging threat with negative impacts on these youth; hence it is vital to investigate their online behavior. This work studies tertiary students’ risk awareness, and provides findings that allow us to understand their knowledge of risks and their behavior in online activities. It reveals several important online issues amongst tertiary students: first, a lack of online security awareness; second, a lack of awareness and information about the dangers of rootkits, internet cookies and spyware; third, female students are more unflinching than male students when commenting on social networking sites; fourth, students are cautious only when obvious security warnings are present; and finally, their usage of internet hotspots is common without a full understanding of its associated dangers. These findings enable us to recommend internet security habits and safety practices that students should adopt when they engage in online activities. A more holistic approach was considered, aiming to minimize future risks and dangers in students’ online activities.
Blind Image Watermarking Based on Chaotic Maps (ITIIIndustries)
Security of a watermark refers to its resistance to unauthorized detecting and decoding, while watermark robustness refers to the watermark’s resistance against common processing. Many watermarking schemes emphasize robustness more than security. However, a robust watermark is not enough to accomplish protection because the range of hostile attacks is not limited to common processing and distortions. In this paper, we give consideration to watermark security. To achieve this, we employ chaotic maps due to their extreme sensitivity to the initial values. If one fails to provide these values, the watermark will be wrongly extracted. While the chaotic maps provide perfect watermarking security, the proposed scheme is also intended to achieve robustness.
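A chaotic map of the kind this abstract relies on, such as the logistic map, can generate a key-dependent watermark bit sequence; the parameter r and the seed below are illustrative choices, not the paper's actual values.

```python
def logistic_bits(x0, n, r=3.99):
    """Derive a bit sequence from the logistic map x -> r*x*(1-x).
    The seed x0 acts as the key; the map's extreme sensitivity to x0
    means a slightly wrong key reproduces a different sequence."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

key_bits = logistic_bits(0.3141, 16)   # illustrative seed, not the paper's
```

Because the iteration is deterministic, the same seed always regenerates the same sequence at extraction time, while an attacker without the exact initial value cannot.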
Programmatic detection of spatial behaviour in an agent-based model (ITIIIndustries)
The automated detection of aspects of spatial behaviour in an agent-based model is necessary for model testing and analysis. In this paper we compare four predictors of herding behaviour in a model of a grazing herbivore. We find that (a) the mean number of neighbours, adjusted to account for population variation, and (b) the mean Hamming distance between rows of the two-dimensional environment can be used to detect herding. Visual inspection of the model behaviour revealed that herding occurs when the herbivore mobility reaches a threshold level. Using this threshold we identify limits for these predictors to use in the program code. These results apply only to one set of parameters and one environment size; future research will involve a wider parameter space.
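Predictor (b), the mean Hamming distance between rows of the two-dimensional environment, is straightforward to sketch; the toy occupancy grids below are invented and only illustrate that identical neighbouring rows give distance zero.

```python
def mean_row_hamming(grid):
    """Mean Hamming distance between consecutive rows of a 2-D grid of
    cell occupancies (1 = agent present, 0 = empty)."""
    dists = [sum(a != b for a, b in zip(r1, r2))
             for r1, r2 in zip(grid, grid[1:])]
    return sum(dists) / len(dists)

herded    = [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]]  # toy clustered herd
scattered = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]  # toy dispersed agents
print(mean_row_hamming(herded), mean_row_hamming(scattered))  # 0.0 4.0
```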
The banking experience for many people today is fundamentally an application of technology to be able to carry out their financial tasks. While the need to visit a bank branch remains essential for a number of activities, increasingly the need to support mobile usage is becoming the central focus of many bank strategies. The core banking systems that process financial transactions must remain highly available and able to support large volumes of activity. These systems represent a long term investment for banks and when the need arises to modernize these large systems, the transformation initiative is often very expensive and of high risk. We present in this paper our experiences in bank modernization and transformation, and outline the strategies for rolling out these large programs. As banking institutions embark upon transformation programs to upgrade their banking channels and core banking systems, it is hoped that the insights presented here are useful as a framework to support these initiatives.
Detecting Fraud Using Transaction Frequency Data (ITIIIndustries)
Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. In this paper, we present a fraud detection method which detects irregular frequency of transaction usage in an Enterprise Resource Planning (ERP) system. We discuss the design, development and empirical evaluation of outlier detection and distance measuring techniques to detect frequency-based anomalies within an individual user’s profile, relative to other similar users. Primarily, we propose three automated techniques: a univariate method, called Boxplot which is based on the sample’s median; and two multivariate methods which use Euclidean distance, for detecting transaction frequency anomalies within each transaction profile. The two multivariate approaches detect potentially fraudulent activities by identifying: (1) users where the Euclidean distance between their transaction-type set is above a certain threshold and (2) users/data points that lie far apart from other users/clusters or represent a small cluster size, using k-means clustering. The proposed methodology allows an auditor to investigate the transaction frequency anomalies and adjust the different parameters, such as the outlier threshold and the Euclidean distance threshold values to tune the number of alerts. The novelty of the proposed technique lies in its ability to automatically trigger alerts from transaction profiles, based on transaction usage performed over a period of time. Experiments were conducted using a real dataset obtained from the production client of a large organization using SAP R/3 (presently the most predominant ERP system), to run its business. The results of this empirical research demonstrate the effectiveness of the proposed approach.
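The univariate Boxplot method mentioned above can be read as the standard interquartile-range fence rule; a minimal sketch under that assumption, with invented transaction counts:

```python
import statistics

def boxplot_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], the classic
    boxplot fence rule for univariate outlier detection."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Invented monthly usage counts of one transaction type across users:
counts = [12, 15, 11, 14, 13, 16, 12, 95]
print(boxplot_outliers(counts))  # [95]
```

Raising or lowering `k` plays the role of the adjustable outlier threshold the abstract describes, letting an auditor tune the number of alerts.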
Mapping the Cybernetic Principles of Viable System Model to Enterprise Servic... (ITIIIndustries)
This paper describes the results of a theoretical mapping of the cybernetic principles of the Viable System Model (VSM) to an Enterprise Service Bus (ESB) model, with the aim to identify the management principles for the integration of services at all levels in the enterprise. This enrichment directly contributes to the viability of service-oriented systems and the justification of Business/IT alignment within enterprise. The model was identified to be suitable for further adaption in the industrial setting planned within Australian governmental departments.
Using Watershed Transform for Vision-based Two-Hand Occlusion in an Interacti... (ITIIIndustries)
To achieve natural interaction in an augmented reality environment, we have suggested using markerless vision-based two-handed gestures for the interaction, with an outstretched hand and a pointing hand used as the virtual object registration plane and pointing device respectively. However, two-handed interaction always causes mutual occlusion, which jeopardizes hand gesture recognition. In this paper, we present a solution for two-hand occlusion using the watershed transform. The main idea is to start from a two-hand occlusion image in binary format, then form a grey-scale image based on the distance of each non-object pixel to an object pixel. The watershed algorithm is applied to the negation of the grey-scale image to form watershed lines which separate the two hands. Fingertips are then identified and each hand is recognized based on the number of fingertips on it: the outstretched hand is assumed to contain 5 fingertips and the pointing hand fewer than 5. An example of applying our result to hand and virtual object interaction is displayed at the end of the paper.
Speech Feature Extraction and Data Visualisation (ITIIIndustries)
This paper presents a signal processing approach to analyse and identify accent-discriminative features of four groups of English as a second language (ESL) speakers: Chinese, Indian, Japanese, and Korean. The features used for speech recognition include pitch, stress, formant frequencies, the Mel frequency coefficient, the log frequency coefficient, and the intensity and duration of vowels spoken. This paper presents our study using the Matlab Speech Analysis Toolbox, and highlights how data processing can be automated and results visualised. The proposed algorithm achieved an average success rate of 57.3% in identifying vowels spoken by the four non-native English speaker groups.
Bayesian-Network-Based Algorithm Selection with High Level Representation Fee... (ITIIIndustries)
A real-world intelligent system consists of three basic modules: environment recognition, prediction (or estimation), and behavior planning. To obtain high-quality results in these modules, high-speed processing and real-time adaptability on a case-by-case basis are required. In the environment recognition module, many different algorithms and algorithm networks exist, with varying performance, so a mechanism that selects the best possible algorithm is required. To solve this problem, we apply an algorithm selection approach to natural image understanding. The selection mechanism is based on machine learning: a bottom-up algorithm selection from real-world image features and a top-down algorithm selection using information obtained from a high-level symbolic world description and algorithm suitability. The algorithm selection method iterates for each input image until the high-level description can no longer be improved. In this paper we present a method of iterative composition of the high-level description. This step-by-step approach allows us to select the best result for each region of the image by evaluating all the intermediate representations and keeping only the best one.
Instance Selection and Optimization of Neural Networks (ITIIIndustries)
Credit scoring is an important tool in financial institutions that can be used in credit-granting decisions. Credit applications are scored by credit scoring models: those with high scores are treated as “good”, while those with low scores are regarded as “bad”. As data mining techniques develop, automatic credit scoring systems are welcomed for their high efficiency and objective judgments. Many machine learning algorithms have been applied to training credit scoring models, and the artificial neural network (ANN) is one with good performance. This paper presents a higher-accuracy credit scoring model based on MLP neural networks trained with the back-propagation algorithm. Our work focuses on enhancing credit scoring models in three aspects: optimizing the data distribution in datasets using a new method called Average Random Choosing; comparing the effects of different numbers of training, validation, and test instances; and finding the most suitable number of hidden units. Another contribution of this paper is summarizing how scoring accuracy tends to change as the number of hidden units increases. The experimental results show that our methods can achieve high credit scoring accuracy on imbalanced datasets. Thus, credit-granting decisions can be made by data mining methods using MLP neural networks.
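To make the training setup concrete, here is a minimal sketch of an MLP with one hidden layer trained by back-propagation, in the spirit of (but not identical to) the paper's model. The layer size, learning rate, and the synthetic two-class "applicants" used below are illustrative assumptions; the paper's actual datasets, the Average Random Choosing method, and its hidden-unit search are not reproduced here.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train_mlp(X, y, hidden=8, lr=0.5, epochs=500, seed=0):
    """Train a one-hidden-layer MLP with full-batch back-propagation.

    Returns a predict() function that maps feature rows to 0/1 labels.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # backward pass (cross-entropy loss with sigmoid output)
        dz2 = (p - y) / n
        dW2 = h.T @ dz2; db2 = dz2.sum(0)
        dz1 = (dz2 @ W2.T) * h * (1 - h)
        dW1 = X.T @ dz1; db1 = dz1.sum(0)
        # gradient descent step
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    def predict(Xq):
        return (sigmoid(sigmoid(Xq @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()

    return predict
```

On two well-separated Gaussian blobs standing in for "good" and "bad" applicants, a few hundred epochs are enough for near-perfect training accuracy; a real scoring model would of course hold out validation and test instances as the paper describes.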
Signature Forgery and the Forger – An Assessment of Influence on Handwritten ... (ITIIIndustries)
Signatures are widely used as a form of personal authentication. Despite their ubiquity, individual signatures are relatively easy to forge, especially when only the static ‘pictorial’ outcome of the signature is considered at verification time. In this study, we explore opinions on signature usage for verification purposes, and how individuals rate a particular third-party signature in terms of its ease of forgeability and their own ability to forge it. We examine responses with respect to an individual’s experience of the forgeability and complexity of their own signature. Our study shows that past experience does not generally affect perceived signature complexity or the perceived ability of an individual to forge a signature themselves. In assessing forgeability, most subjects cite overall signature complexity and distinguishing features in reaching their decision. Furthermore, our research indicates that individuals typically vary their signature according to the scenario but generally put little effort into its production.
Stability of Individuals in a Fingerprint System across Force Levels (ITIIIndustries)
This research studied the question: “Is every individual’s performance stable in a fingerprint recognition system?” The fingerprints of 154 individuals, provided at different force levels, were examined using the biometric menagerie tool, first coined by Doddington et al. in 1998. The biometric menagerie illustrates how each person in a given dataset performs in a biometric system by using their genuine and impostor scores and assigning them a classification based upon those scores. This research examined the biometric menagerie classifications across different force levels in a fingerprint recognition study to uncover whether individuals performed the same over five force levels. The study concluded that they did not, and a new metric, the Stability Score Index, is described to quantify this phenomenon and showcase the movement of individuals in the menagerie.
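The menagerie classification the abstract refers to can be sketched as follows. This is a deliberately simplified, hypothetical version (the study's exact score model, thresholds, and the Stability Score Index itself are not given here): users with the weakest mean genuine scores are labelled goats, users most easily matched by impostors are labelled lambs, and everyone else is a sheep, following the usual Doddington-zoo terminology.

```python
import numpy as np


def menagerie_labels(genuine_means, impostor_means, q=0.25):
    """Simplified Doddington-zoo classification from per-user mean match scores.

    goats: bottom q of genuine scores (poorly recognized as themselves)
    lambs: top q of impostor scores (easily matched by others)
    sheep: everyone else
    """
    g = np.asarray(genuine_means, dtype=float)
    i = np.asarray(impostor_means, dtype=float)
    goat_cut = np.quantile(g, q)
    lamb_cut = np.quantile(i, 1 - q)
    labels = []
    for gm, im in zip(g, i):
        if gm <= goat_cut:
            labels.append("goat")
        elif im >= lamb_cut:
            labels.append("lamb")
        else:
            labels.append("sheep")
    return labels
```

Running this per force level and comparing the resulting label sequences would expose exactly the kind of movement between classes that the study reports.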
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers, along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
- Your campaign sent to target colleagues for approval
- If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- If the “Reject” button is pushed instead, colleagues will be alerted via a Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
- State of global ICS asset and network exposure
- Sectoral targets and attacks, as well as the cost of ransom
- Global APT activity, AI usage, actor and tactic profiles, and implications
- Rise in volumes of AI-powered cyberattacks
- Major cyber events in 2024
- Malware and malicious payload trends
- Cyberattack types and targets
- Vulnerability exploit attempts on CVEs
- Attacks on counties – USA
- Expansion of bot farms – how, where, and why
- In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
- Why are attacks on smart factories rising?
- Cyber risk predictions
- Axis of attacks – Europe
- Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it requires vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.