This document summarizes a research paper that proposes a solution for detecting Distributed Denial of Service (DDoS) attacks in cloud computing environments. The solution involves deploying Intrusion Detection Systems (IDSs) in each virtual machine to detect attacks. When attacks are detected, alerts are stored in a database in the Cloud Fusion Unit on the front-end server. The Cloud Fusion Unit then analyzes the alerts from the multiple IDSs using Dempster-Shafer theory and fault tree analysis. It combines the evidence from the different IDSs using Dempster's combination rule to fuse the data and reduce false alarms. The goal is to more accurately detect DDoS attacks targeting cloud services by correlating alerts from the multiple IDSs.
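The core fusion step, Dempster's combination rule, can be sketched in a few lines; the mass values below are hypothetical, chosen only to illustrate how combining two IDS reports sharpens belief in an attack:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) over the same
    frame of discernment using Dempster's rule. Hypotheses are frozensets;
    the full frame stands for 'uncertain'."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict  # normalization constant
    return {h: v / k for h, v in combined.items()}

A, N = frozenset({"attack"}), frozenset({"normal"})
THETA = A | N  # the whole frame: 'could be either'
ids1 = {A: 0.7, N: 0.1, THETA: 0.2}  # hypothetical sensor 1 report
ids2 = {A: 0.6, N: 0.2, THETA: 0.2}  # hypothetical sensor 2 report
fused = dempster_combine(ids1, ids2)
```

After normalizing away the conflicting mass, the fused belief in "attack" exceeds either sensor's individual belief, which is exactly the false-alarm-reducing effect the paper relies on.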
An Empirical Study for Defect Prediction using Clustering (idescitation)
Reliably predicting defects in software is one of the holy grails of software engineering. Researchers have devised and implemented a variety of defect prediction approaches that vary in terms of accuracy, complexity, and the input data they require. An accurate prediction of the number of defects in a software product during system testing contributes not only to the management of the system testing process but also to the estimation of the product's required maintenance [1]. A prediction of the number of remaining defects in an inspected artefact can be used for decision making. Defective software modules cause software failures, increase development and maintenance costs, and decrease customer satisfaction. Defect prediction strives to improve software quality and testing efficiency by constructing predictive models from code attributes to enable timely identification of fault-prone modules [2]. In this paper, we discuss how clustering techniques are used for software defect prediction, which helps developers to detect software defects and correct them. Unsupervised techniques may be used for defect prediction in software modules, especially in cases where defect labels are not available [3].
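As a sketch of the unsupervised idea, a tiny one-dimensional k-means can split modules into "likely clean" and "fault-prone" groups by a single code metric; the metric values below are invented for illustration:

```python
def kmeans_1d(values, iters=50):
    """Minimal 1-D k-means with k=2: cluster modules by one code metric
    (e.g. cyclomatic complexity). Returns the two centers and a label
    per value (0 = cluster of the lower center, 1 = higher)."""
    c = [min(values), max(values)]  # seed centers at the extremes
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        new = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
        if new == c:  # converged
            break
        c = new
    labels = [0 if abs(v - c[0]) <= abs(v - c[1]) else 1 for v in values]
    return c, labels

# hypothetical complexity values for eight modules
centers, labels = kmeans_1d([2, 3, 2, 4, 25, 30, 28, 3])
```

Modules landing in the cluster with the higher center would be flagged as fault-prone without needing any defect labels.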
Integration of feature sets with machine learning techniques (iaemedu)
This document summarizes a research paper that proposes a novel approach for spam filtering using selective feature sets combined with machine learning techniques. The paper presents an algorithm and system architecture that extracts feature sets from emails and uses machine learning to classify emails and generate rules to identify spam. Several metrics are identified to evaluate the efficiency of the feature sets, including false positive rate. An experiment is described that uses keyword lists as feature sets to train filters and compares the proposed approach to other spam filtering methods.
Implementation of digital image watermarking techniques using dwt and dwt svd... (eSAT Journals)
Abstract
These days, digital content is used extensively in every field. Data handled on the web and in multimedia network systems is in digital form. Digital watermarking is the technology of embedding information in digital content that we need to protect from illegal copying. Digital image watermarking hides information in any form (text, image, audio or video) in an original image without degrading its perceptual quality. In the case of the Discrete Wavelet Transform (DWT), the original image is decomposed in order to embed the watermark. In the hybrid scheme (DWT-SVD), the image is first decomposed by DWT and the watermark is then embedded in the singular values obtained by applying Singular Value Decomposition (SVD). DWT and SVD are used in combination to improve the quality of watermarking. The techniques are compared on the basis of their Peak Signal to Noise Ratio (PSNR) values at different values of the scaling factor; a high PSNR value is desired because it indicates good imperceptibility of the scheme.
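The PSNR measure used in these comparisons is easy to state in code; a minimal pure-Python version for two equally sized grayscale images given as flat pixel lists:

```python
import math

def psnr(original, watermarked, peak=255):
    """Peak Signal to Noise Ratio in dB between two equal-length
    sequences of pixel values; higher means less visible distortion."""
    mse = sum((o - w) ** 2 for o, w in zip(original, watermarked)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)
```

For example, a watermark that shifts every pixel by one gray level gives an MSE of 1 and hence a PSNR of about 48 dB, typically considered imperceptible.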
High performance intrusion detection using modified k mean & naïve bayes (eSAT Journals)
Abstract
Internet technology is growing at an exponential rate, making data security of computer systems more complex and critical. Multiple methodologies have been implemented for this purpose in recent times, as detailed in [1], [3]. The availability of larger bandwidth has connected many large computer server networks worldwide, increasing the need to secure data, and an Intrusion Detection System (IDS) is one of the most efficient techniques for maintaining the security of a computer system. The proposed system is designed to help identify malicious behavior and improper use of a computer system. In this report we propose a hybrid technique for intrusion detection using data mining algorithms. Our main objective is to carry out a complete analysis of an intrusion detection dataset to test the implemented system. We propose a new methodology in which Modified K-Mean is used for clustering and Naïve Bayes for classification. These two data mining techniques are used for intrusion detection in a large, horizontally distributed database.
Keywords: Intrusion Detection, Modified K-Mean, Naïve Bayes
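A toy sketch of the classification half of such a pipeline (the Modified K-Mean clustering step is omitted); the connection records are invented, loosely styled after KDD-type features:

```python
from collections import Counter, defaultdict
import math

def train_nb(records, labels):
    """Train a categorical Naive Bayes model on tuples of discrete
    connection features, e.g. (protocol, service, flag)."""
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)  # (class, feature index) -> value counts
    vocab = defaultdict(set)             # feature index -> distinct values seen
    for rec, y in zip(records, labels):
        for i, v in enumerate(rec):
            value_counts[(y, i)][v] += 1
            vocab[i].add(v)
    return class_counts, value_counts, vocab

def predict_nb(model, rec):
    """Pick the class with the highest log posterior (add-one smoothing)."""
    class_counts, value_counts, vocab = model
    total = sum(class_counts.values())
    def log_post(y):
        lp = math.log(class_counts[y] / total)
        for i, v in enumerate(rec):
            lp += math.log((value_counts[(y, i)][v] + 1) /
                           (class_counts[y] + len(vocab[i])))
        return lp
    return max(class_counts, key=log_post)

# invented training records: (protocol, service, flag) -> label
train = [("tcp", "http", "SF"), ("tcp", "http", "SF"), ("udp", "dns", "SF"),
         ("tcp", "private", "S0"), ("tcp", "private", "S0"), ("icmp", "ecr_i", "S0")]
y = ["normal", "normal", "normal", "attack", "attack", "attack"]
model = train_nb(train, y)
```

In the proposed hybrid, the clusters produced by Modified K-Mean would supply or refine the labels that this classifier is trained on.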
This document summarizes research on transaction reordering techniques. It discusses transaction reordering approaches based on reducing resource conflicts and increasing resource sharing. Specifically, it covers:
1) A "steal-on-abort" technique that reorders an aborted transaction behind the transaction that caused the abort to avoid repeated conflicts.
2) A replication protocol that attempts to reorder transactions during certification to avoid aborts rather than restarting immediately.
3) Transaction reordering and grouping during continuous data loading to prevent deadlocks when loading data for materialized join views.
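The steal-on-abort idea in (1) can be sketched as a small scheduler; the class and method names here are illustrative, not from the paper:

```python
from collections import deque

class StealOnAbortScheduler:
    """Sketch of steal-on-abort: an aborted transaction is 'stolen' by
    the transaction that caused the abort and re-queued only after the
    offender commits, avoiding an immediate repeat of the same conflict."""
    def __init__(self):
        self.ready = deque()
        self.stolen = {}  # offender id -> transactions waiting behind it

    def submit(self, txn):
        self.ready.append(txn)

    def on_abort(self, txn, offender):
        # park the victim behind the transaction that aborted it
        self.stolen.setdefault(offender, []).append(txn)

    def on_commit(self, offender):
        # the conflict is gone; release anything parked behind the offender
        for txn in self.stolen.pop(offender, []):
            self.ready.append(txn)

    def next_txn(self):
        return self.ready.popleft() if self.ready else None

sched = StealOnAbortScheduler()
sched.submit("T1")
sched.submit("T2")
```

Dispatching T1 and T2, aborting T2 against T1, then committing T1 shows T2 held back until the offender is out of the way.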
Adaptive job scheduling with load balancing for workflow application (iaemedu)
This document discusses adaptive job scheduling with load balancing for workflow applications in a grid platform. It begins with an abstract that describes grid computing and how scheduling plays a key role in performance for grid workflow applications. Both static and dynamic scheduling strategies are discussed, but they require high scheduling costs and may not produce good schedules. The paper then proposes a novel semi-dynamic algorithm that allows the schedule to adapt to changes in the dynamic grid environment through both static and dynamic scheduling. Load balancing is incorporated to handle situations where jobs are delayed due to resource fluctuations or overloading of processors. The rest of the paper outlines the related works, proposed scheduling algorithm, system model, and evaluation of the approach.
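A minimal sketch of the static half of such a scheduler, greedy earliest-finish-time assignment; re-running it with freshly observed processor speeds gives the semi-dynamic, load-balancing flavor described above (the job costs and speeds are illustrative, not from the paper):

```python
def schedule(jobs, speeds):
    """Assign each job (a cost in work units) to the processor that
    would finish it soonest given current load and processor speed.
    Returns the placement list and per-processor finish times."""
    finish = [0.0] * len(speeds)
    placement = []
    for cost in jobs:
        p = min(range(len(speeds)), key=lambda i: finish[i] + cost / speeds[i])
        finish[p] += cost / speeds[p]
        placement.append(p)
    return placement, finish

# two equal processors: jobs alternate and the load stays balanced
placement, finish = schedule([4, 4, 4, 4], [1.0, 1.0])
# processor 0 observed to be twice as fast: the re-run shifts load to it
placement2, finish2 = schedule([4, 4, 4, 4], [2.0, 1.0])
```

Recomputing the mapping whenever monitored speeds drift is the simplest way to absorb the resource fluctuations and overloading the abstract mentions.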
A FUZZY MATHEMATICAL MODEL FOR PERFORMANCE TESTING IN CLOUD COMPUTING USING US... (ijseajournal)
Software testing is an integral part of the software product development life cycle. Conventional testing requires dedicated infrastructure and resources that are expensive and only used sporadically. With the growing complexity of business applications it is harder to build, and to maintain, in-house testing facilities that mimic real-time environments. By nature, cloud computing provides unlimited resources along with flexibility, scalability and the availability of a distributed testing environment, and has thus opened up new opportunities for software testing. It leads to cost-effective solutions by reducing the execution time of large application testing. As part of the infrastructure resource, cloud testing can attain its efficiency by taking care of parameters like network traffic, disk storage and RAM speed. In this paper we propose a new fuzzy mathematical model to attain better scope for the above parameters.
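A fuzzy model of such parameters typically starts from membership functions; a minimal sketch using triangular memberships, with hypothetical breakpoints for network traffic that are not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets for one cloud-testing parameter,
# network traffic in Mbps; breakpoints are illustrative only.
def traffic_low(mbps):
    return tri(mbps, -1, 0, 60)

def traffic_high(mbps):
    return tri(mbps, 40, 100, 161)
```

A value like 50 Mbps then belongs partially to both sets, which is what lets a fuzzy model grade resource load instead of thresholding it.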
Fuzzy Type Image Fusion Using SPIHT Image Compression Technique (IJERA Editor)
This paper presents a fuzzy type image fusion technique using Set Partitioning in Hierarchical Trees (SPIHT).
It is concluded that fusion with higher single levels provides better fusion quality. This technique can be used
for fusion of fuzzy images as well as multi-modal image fusion. The proposed algorithm is very simple, easy to implement, and could be used for real-time applications. The paper also provides a comparative study between the proposed and previously existing techniques, and validates the proposed algorithm using Peak Signal to Noise Ratio (PSNR) and Root Mean Square Error (RMSE).
New Design Architecture of Chaotic Secure Communication System Combined with ... (ijtsrd)
In this paper, the exponential synchronization of secure communication system is introduced and a novel secure communication design combined with linear receiver is constructed to ensure the global exponential stability of the resulting error signals. Besides, the guaranteed exponential convergence rate of the proposed secure communication system can be correctly calculated. Finally, some numerical simulations are offered to demonstrate the correctness and feasibility of the obtained results. Yeong-Jeu Sun "New Design Architecture of Chaotic Secure Communication System Combined with Linear Receiver" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-1 , December 2020, URL: https://www.ijtsrd.com/papers/ijtsrd38214.pdf Paper URL : https://www.ijtsrd.com/engineering/electrical-engineering/38214/new-design-architecture-of-chaotic-secure-communication-system-combined-with-linear-receiver/yeongjeu-sun
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
SIGNIFICANCE OF RATIONAL 6TH ORDER DISTORTION MODEL IN THE FIELD OF MOBILE’S ... (P singh)
The document discusses a proposed method for video watermarking that uses spatial and frequency domain techniques for embedding watermark information, and tests the method's robustness against rational 6th order distortion. The key steps are: (1) extracting frames from a video and selecting the highest entropy frame, (2) using spread spectrum and LSB techniques to embed a watermark in the spatial domain and DWT in the frequency domain, (3) applying rational 6th order distortion to test the effect on the watermarked video, (4) calculating metrics like correlation, SSIM, PSNR, BER and MSE to evaluate the method and detect the watermark from the distorted video. The results show the values of correlation and SSIM
An ideal steganographic scheme in networks using twisted payload (eSAT Journals)
Abstract With the rapid development of network technology, information security has become a mounting problem. Steganography involves hiding information in a cover medium in such a way that the cover medium is not suspected of carrying any confidential message by an unintended recipient. In this paper, an ideal steganographic scheme in networks is proposed using a twisted payload. The confidential image values are twisted using scrambling techniques. The Discrete Wavelet Transform (DWT) is applied to the cover image and the Integer Wavelet Transform (IWT) is applied to the scrambled confidential image. A merge operation is performed on both images and the inverse DWT is computed to obtain the stego image. The extracting algorithm is the reverse of the information hiding algorithm. The resulting scheme generates a stego image which is immune to conventional attacks and offers good imperceptibility compared to other steganographic approaches. Index Terms: Network security, Steganography, Discrete Wavelet Transform, Integer Wavelet Transform, Modified Arnold Transform, Merge Operation, Quality Measures
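One common choice for the scrambling ("twisting") step is the Arnold cat map; the paper's Modified Arnold Transform may differ in detail, so this is a sketch of the standard map on an N x N image:

```python
def arnold_scramble(img, rounds=1):
    """Arnold cat map on an N x N image (list of lists of pixels):
    (x, y) -> ((x + y) mod N, (x + 2y) mod N). The map is a bijection
    and periodic, so applying it enough times restores the original."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

img = [[4 * r + c for c in range(4)] for r in range(4)]
once = arnold_scramble(img, 1)
```

For N = 4 the map's period is 3, so three rounds recover the original; the round count effectively acts as a key.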
An approach for ids by combining svm and ant colony algorithm (eSAT Journals)
This summarizes a research paper that proposes a new approach called CSVAC (Combined Support Vector with Ant Colony) for intrusion detection. The approach combines two algorithms, Support Vector Machine (SVM) and Ant Colony Optimization (ACO), to classify network data as normal or abnormal. SVM is used to generate a separating hyperplane and find support vectors, while ACO performs clustering around the support vectors. The clusters are added to the SVM training set and it is retrained in an iterative process until the detection rate exceeds a threshold. The paper evaluates this approach on the standard KDD99 dataset and finds it achieves superior results to other algorithms in terms of accuracy and efficiency.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ... (ijcseit)
Many studies have been done in the area of Wireless Sensor Networks (WSNs) in recent years. In this kind of network, some of the key objectives to be satisfied are area coverage, the number of active sensors, and the energy consumed by nodes. In this paper, we propose an NSGA-II based multi-objective algorithm for optimizing all of these objectives simultaneously. The simulation results demonstrate the efficiency of our algorithm in finding the optimal balance point among the maximum coverage rate, the least energy consumption, and the minimum number of active nodes while maintaining the connectivity of the network.
Deep Convolutional Neural Network based Intrusion Detection System (Sri Ram)
In the present era, cyberspace is growing tremendously, and an Intrusion Detection System (IDS) plays a key role in ensuring information security. The IDS, which works at the network and host level, should be capable of identifying various malicious attacks. The job of a network-based IDS is to differentiate between normal and malicious traffic data and raise an alert in case of an attack. Apart from the traditional signature- and anomaly-based approaches, many researchers have employed various Deep Learning (DL) techniques for detecting intrusions, as DL models are capable of extracting salient features automatically from the input data. The application of the Deep Convolutional Neural Network (DCNN), which is used quite often for solving research problems in the image processing and vision fields, has not been explored much for IDS. In this paper, a DCNN architecture for IDS trained on the KDDCUP 99 dataset is proposed. This work also shows that the DCNN-IDS model performs better than other existing works.
A PSO-Based Subtractive Data Clustering Algorithm (IJORCS)
There is a tremendous proliferation in the amount of information available on the largest shared information source, the World Wide Web. Fast and high-quality clustering algorithms play an important role in helping users effectively navigate, summarize, and organize the information. Recent studies have shown that partitional clustering algorithms such as the k-means algorithm are the most popular algorithms for clustering large datasets. The major problem with partitional clustering algorithms is that they are sensitive to the selection of the initial partitions and are prone to premature convergence to local optima. Subtractive clustering is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers for any given set of data. The cluster estimates can be used to initialize iterative optimization-based clustering methods and model identification methods. In this paper, we present a hybrid Subtractive + PSO (Particle Swarm Optimization) clustering algorithm that performs fast clustering. For comparison purposes, we applied the Subtractive + PSO clustering algorithm, PSO, and Subtractive clustering on three different datasets. The results illustrate that the Subtractive + PSO clustering algorithm generates the most compact clustering results compared to the other algorithms.
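The first step of subtractive clustering, scoring each point's potential (a density measure) and picking the densest point as the initial center, can be sketched as follows; the neighborhood radius and sample points are illustrative:

```python
import math

def subtractive_first_center(points, ra=1.0):
    """Compute each point's potential, the sum of Gaussian-weighted
    distances to all points, and return the highest-potential point
    as the first cluster center (Chiu's subtractive clustering)."""
    alpha = 4.0 / (ra * ra)
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    potentials = [sum(math.exp(-alpha * d2(p, q)) for q in points)
                  for p in points]
    best = max(range(len(points)), key=potentials.__getitem__)
    return points[best], potentials

# three points huddled near the origin plus one far outlier
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (10.0, 10.0)]
center, pot = subtractive_first_center(pts)
```

In the hybrid algorithm these one-pass estimates would seed the PSO search instead of random initial partitions, which is what sidesteps the premature-convergence problem.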
Image Steganography Based On Non Linear Chaotic Algorithm (IJARIDEA Journal)
Abstract— Recent research on image steganography has increasingly been based on chaotic systems, but the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are evident. This paper presents steganography with a nonlinear chaotic algorithm (NCA) which uses a power function and a tangent function instead of a linear function. Its structural parameters are obtained by experimental analysis. The message, together with the key, is then combined with the cover image using LSB embedding and the discrete cosine transform.
Keywords— Discrete Cosine Transform, LSB Embedding, Nonlinear Chaotic Algorithm, Steganography, Stego Image.
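The LSB embedding stage mentioned above can be sketched in a few lines (the chaotic-key and DCT stages are omitted; pixels are flat 8-bit values):

```python
def embed_lsb(pixels, message_bits):
    """Replace the least significant bit of each pixel with one message
    bit; the cover must have at least as many pixels as bits."""
    stego = [(p & ~1) | b for p, b in zip(pixels, message_bits)]
    return stego + pixels[len(message_bits):]

def extract_lsb(pixels, n_bits):
    """Read the message back out of the low bits."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [100, 101, 102, 103]
stego = embed_lsb(cover, [1, 0, 1, 1])
```

Each pixel changes by at most one gray level, which is why LSB embedding is perceptually invisible even before any transform-domain tricks.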
Data Hiding and Retrieval using Visual Cryptography (AM Publications)
Nath et al. developed several methods for hiding data in a cover file using different steganography methods. In some methods, Nath et al. first applied an encryption method before hiding the message in the cover file. For security reasons the secret message is encrypted before being inserted into the cover file. To make the system more complex, the authors used random insertion of bits so that even if intruders can extract the bits from the cover file, they cannot reconstruct the original secret message. In the present work the authors apply a different data hiding algorithm based on visual cryptography. Visual cryptography is nowadays a very popular method for hiding a secret message inside multiple shares. Initially, researchers tried to hide a simple black-and-white secret message in two shares, but gradually they started to hide color images (text, images, or any object) in two or more shares. In the present work the authors hide a color message/image in two or more shares. The interesting part of the present method is that from one share it is impossible to create the second share or to extract the hidden secret message without having the other share(s). The present method may be used for reconstructing passwords or any kind of important message or image, and may be applied in forensic departments or in defense for sending confidential messages.
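The classic (2, 2) construction for a black-and-white secret, which the color schemes above build on, can be sketched directly; each secret pixel expands into two subpixels per share:

```python
import random

PAIRS = [(0, 1), (1, 0)]  # the two complementary subpixel patterns

def make_shares(secret, rng=random):
    """(2, 2) visual secret sharing for a binary image (0=white, 1=black).
    Identical patterns on both shares stack to half-black (white pixel);
    complementary patterns stack to fully black (black pixel)."""
    s1, s2 = [], []
    for bit in secret:
        a = rng.choice(PAIRS)           # random pattern for share 1
        b = a if bit == 0 else (a[1], a[0])
        s1.append(a)
        s2.append(b)
    return s1, s2

def stack(s1, s2):
    """Simulate superimposing the transparencies: black wins (bitwise OR)."""
    return [(a0 | b0, a1 | b1) for (a0, a1), (b0, b1) in zip(s1, s2)]

rng = random.Random(0)
s1, s2 = make_shares([1, 0, 1, 1, 0], rng)
stacked = stack(s1, s2)
```

Every pattern in either share alone has exactly one black subpixel, so a single share carries no information about the secret, matching the "impossible from one share" property described above.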
Entropy based Digital Watermarking using 2-D Biorthogonal WAVELET (paperpublications3)
Abstract: Security is the most important aspect of a database. To maintain the integrity and security of a system, image watermarking was proposed as a technique in 1996; in this paper we implement image watermarking using the 2-D biorthogonal wavelet. The importance of transmitting digital information in a digital watermarking system, and a dissymmetric digital watermarking framework based on media content communication, are also discussed. We then apply a watermark embedding algorithm to keep the balance between the watermark's imperceptibility and its robustness while the data is being sent over the communication channel.
Keywords: Discrete Wavelet Transform (DWT), Gray Scale, Peak Signal to Noise Ratio (PSNR).
Title: Entropy based Digital Watermarking using 2-D Biorthogonal WAVELET
Author: Abhinav Kumar
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
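The entropy measure that entropy-based schemes use to pick busy blocks (where a watermark hides best) is simple to compute; a sketch for a block of gray values:

```python
import math
from collections import Counter

def entropy(block):
    """Shannon entropy in bits per pixel of a block of gray values;
    higher entropy means a busier block, which hides changes better."""
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())
```

A flat block scores 0 bits while a block of uniformly distributed values scores the maximum, so ranking blocks by entropy directly orders them by embedding suitability.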
This document summarizes various image compression techniques including lossy and lossless methods. For lossy compression, it discusses transformation coding techniques like discrete cosine transform (DCT) and discrete wavelet transform (DWT). It also covers vector quantization, fractal coding, block truncation coding, and subband coding. For lossless techniques, it describes run length encoding, Huffman coding, LZW coding, and area coding. Finally, it provides an overview of using neural networks for image compression, including backpropagation networks, hierarchical/adaptive networks, and multilayer perceptron networks.
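Run-length encoding, the simplest of the lossless methods listed, can be sketched as a round-trip pair:

```python
def rle_encode(data):
    """Collapse runs of equal symbols into (symbol, count) pairs."""
    runs = []
    for s in data:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(s, c) for s, c in runs]

def rle_decode(runs):
    """Expand (symbol, count) pairs back to the original sequence."""
    return [s for s, c in runs for _ in range(c)]
```

RLE only pays off on data with long runs (e.g. binary images or quantized DCT coefficients); on noisy data the encoded form can be larger than the input.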
This document summarizes image compression techniques. It discusses how image compression aims to reduce the number of bits required to represent an image by removing spatial and spectral redundancies. The techniques are classified as lossless or lossy. Lossy techniques discussed include discrete cosine transform coding, discrete wavelet transform coding, vector quantization, fractal coding, and block truncation coding. Key metrics for evaluating compression performance and quality, such as mean square error and peak signal-to-noise ratio, are also defined.
On the Internet Delay Space Dimensionality (Bruno Abrahao)
We investigate the dimensionality properties of the Internet delay space, i.e., the matrix of measured round-trip latencies between Internet hosts. Previous work on network coordinates has indicated that this matrix can be embedded, with reasonably low distortion, into a 4- to 9-dimensional Euclidean space. The application of Principal Component Analysis (PCA) reveals the same dimensionality values. Our work addresses the question: to what extent is the dimensionality an intrinsic property of the delay space, defined without reference to a host metric such as Euclidean space? Is the intrinsic dimensionality of the Internet delay space approximately equal to the dimension determined using embedding techniques or PCA? If not, what explains the discrepancy? What properties of the network contribute to its overall dimensionality? Using datasets obtained via the King [14] method, we study different measures of dimensionality to establish the following conclusions. First, based on its power-law behavior, the structure of the delay space can be better characterized by fractal measures. Second, the intrinsic dimension is significantly smaller than the value predicted by the previous studies; in fact, by our measures it is less than 2. Third, we demonstrate a particular way in which the AS topology is reflected in the delay space; subnetworks composed of hosts which share an upstream Tier-1 autonomous system possess lower dimensionality than the combined delay space. Finally, we observe that fractal measures, due to their sensitivity to non-linear structures, display higher precision for measuring the influence of subtle features of the delay space geometry.
A comparative study on visual cryptography (eSAT Journals)
Abstract The effective and secure protection of sensitive information is a primary concern in commercial, medical and military systems. To address the reliability problems for secret images, a visual cryptography scheme is a good alternative to remedy the vulnerabilities. Visual cryptography is a very secure and unique way to protect secrets. It is an encryption technique used to hide information present in an image. Unlike traditional cryptographic schemes, it uses the human eye to recover the secret, without complex decryption algorithms or the aid of computers. It is a secret sharing scheme which uses images distributed as shares such that, when the shares are superimposed, a hidden secret image is revealed. In this paper we present various visual cryptography techniques and research work done in this field. Keywords: Secret image sharing, cryptography, visual quality of image, pixel expansion
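The stacking behavior described above can be sketched with a probabilistic (2, 2) scheme without pixel expansion. This is a simplified illustration rather than any specific scheme from the surveyed papers, and the bit convention (1 = black) is an assumption:

```python
import random

def make_shares(secret_bits, seed=42):
    """Probabilistic (2, 2) visual secret sharing without pixel expansion.
    Convention (assumed): 1 = black pixel, 0 = white pixel."""
    rng = random.Random(seed)
    share1 = [rng.randint(0, 1) for _ in secret_bits]
    # For a black secret pixel the second share is the complement of the
    # first, so stacking always yields black; for a white pixel the two
    # shares agree, so the stacked pixel is black only about half the time.
    share2 = [s1 ^ b for s1, b in zip(share1, secret_bits)]
    return share1, share2

def superimpose(share1, share2):
    """Stacking printed transparencies acts as a pixelwise OR."""
    return [a | b for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]
s1, s2 = make_shares(secret)
stacked = superimpose(s1, s2)
print(stacked)  # every black secret pixel is black in the stacked result
```

Each share on its own is a uniformly random bit pattern, so it leaks nothing about the secret; only superimposing both shares exposes the black pixels, which is the property the surveyed schemes rely on.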
MEANINGFUL AND UNEXPANDED SHARES FOR VISUAL SECRET SHARING SCHEMES (ijiert bestjournal)
In today's internet world it is essential to secretly share biometric data stored in a central database. There are many options for secretly sharing biometric data using cryptographic computation. This work reviews and applies a perfectly secure method to secretly share biometric data, for possible use in biometric authentication and protection, based on the concept of visual cryptography. The basic idea of the proposed approach is to secretly share a private image into two meaningful and unexpanded shares (sheets) that are stored in two separate database servers, such that decryption can be performed only when both shares are simultaneously available; at the same time, an individual share does not reveal the identity of the private image. Previous research, such as Arun Ross et al. in 2011, used pixel expansion for encryption, which wastes storage space and transmission time. Furthermore, other work, such as Hou and Quan's research in 2011, produced meaningless shares, which visually reveal the existence of a secret image. In this work, we review visual cryptography schemes and apply them to secretly share biometric data such as fingerprint and face images for the purpose of user authentication. Using this technique we can secretly share biometric data over the internet, and only an authorized user can decrypt the information.
This document discusses data hiding techniques for images. It begins by introducing steganography and some common image steganography methods like LSB substitution, blocking, and palette modification. It then reviews related work on minimizing distortion in steganography, modifying matrix encoding for minimal distortion, and designing adaptive steganographic schemes. The document proposes using a universal distortion measure to evaluate embedding changes independently of the domain. It presents a system for reversible data hiding in encrypted images that partitions the image, encrypts it, hides data in the encrypted image, and allows extraction from the decrypted or encrypted image. Least significant bit substitution is discussed as an approach for hiding data in the encrypted image.
The document summarizes research on trends in selective abortions of girls in India from 1980 to 2010. It finds that selective abortion of girls has increased over the last two decades, with 4.2 to 12.1 million girls aborted over this period. Selective abortion is now common across India and is most prevalent among wealthy, educated households and families who already have one daughter.
The survey of 1940 pediatricians in India found that 61% observed more celebrations at the birth of baby boys. 52% saw that baby boys had more appropriate weight for their age. 57% clinically observed more malnutrition in girls. 41% found that parents were more inclined to breastfeed baby boys. 61% noticed that school dropouts were more common among baby girls.
New Design Architecture of Chaotic Secure Communication System Combined with ... (ijtsrd)
In this paper, the exponential synchronization of a secure communication system is introduced and a novel secure communication design combined with a linear receiver is constructed to ensure the global exponential stability of the resulting error signals. Besides, the guaranteed exponential convergence rate of the proposed secure communication system can be correctly calculated. Finally, some numerical simulations are offered to demonstrate the correctness and feasibility of the obtained results. Yeong-Jeu Sun, "New Design Architecture of Chaotic Secure Communication System Combined with Linear Receiver", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-1, December 2020. URL: https://www.ijtsrd.com/papers/ijtsrd38214.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/38214/new-design-architecture-of-chaotic-secure-communication-system-combined-with-linear-receiver/yeongjeu-sun
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
SIGNIFICANCE OF RATIONAL 6TH ORDER DISTORTION MODEL IN THE FIELD OF MOBILE’S ... (P singh)
The document discusses a proposed method for video watermarking that uses spatial and frequency domain techniques for embedding watermark information, and tests the method's robustness against rational 6th order distortion. The key steps are: (1) extracting frames from a video and selecting the highest entropy frame, (2) using spread spectrum and LSB techniques to embed a watermark in the spatial domain and DWT in the frequency domain, (3) applying rational 6th order distortion to test the effect on the watermarked video, (4) calculating metrics like correlation, SSIM, PSNR, BER and MSE to evaluate the method and detect the watermark from the distorted video. The results show the values of correlation and SSIM
An ideal steganographic scheme in networks using twisted payload (eSAT Journals)
Abstract With the rapid development of network technology, information security has become a mounting problem. Steganography involves hiding information in a cover medium in such a way that the cover medium is not suspected of carrying any confidential message by an unintended recipient. In this paper, an ideal steganographic scheme in networks is proposed using a twisted payload. The confidential image values are twisted using scrambling techniques. The Discrete Wavelet Transform (DWT) is applied to the cover image and the Integer Wavelet Transform (IWT) is applied to the scrambled confidential image. A merge operation is performed on both images and the inverse DWT is computed on the result to obtain the stego image. The information hiding algorithm is the reverse of the extraction algorithm. The resulting scheme generates a stego image which is immune to conventional attacks and offers good perceptibility compared to other steganographic approaches. Index Terms: Network security, Steganography, Discrete Wavelet Transform, Integer Wavelet Transform, Modified Arnold Transform, Merge Operation, Quality Measures
An approach for IDS by combining SVM and ant colony algorithm (eSAT Journals)
This summarizes a research paper that proposes a new approach called CSVAC (Combined Support Vector with Ant Colony) for intrusion detection. The approach combines two algorithms, Support Vector Machine (SVM) and Ant Colony Optimization (ACO), to classify network data as normal or abnormal. SVM is used to generate a separating hyperplane and find support vectors, while ACO performs clustering around the support vectors. The clusters are added to the SVM training set and it is retrained in an iterative process until the detection rate exceeds a threshold. The paper evaluates this approach on the standard KDD99 dataset and finds it achieves superior results to other algorithms in terms of accuracy and efficiency.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ... (ijcseit)
Many studies have been done in the area of Wireless Sensor Networks (WSNs) in recent years. In such networks, key objectives that need to be satisfied include area coverage, the number of active sensors, and the energy consumed by nodes. In this paper, we propose an NSGA-II based multi-objective algorithm for optimizing all of these objectives simultaneously. The efficiency of our algorithm is demonstrated in the simulation results: it finds the optimal balance point among maximum coverage rate, least energy consumption, and minimum number of active nodes while maintaining the connectivity of the network.
Deep Convolutional Neural Network based Intrusion Detection System (Sri Ram)
In the present era, cyberspace is growing tremendously, and the Intrusion Detection System (IDS) plays a key role in ensuring information security. The IDS, which works at the network and host level, should be capable of identifying various malicious attacks. The job of a network-based IDS is to differentiate between normal and malicious traffic and raise an alert in case of an attack. Apart from the traditional signature- and anomaly-based approaches, many researchers have employed various Deep Learning (DL) techniques for detecting intrusions, as DL models are capable of extracting salient features automatically from the input data. The application of the Deep Convolutional Neural Network (DCNN), which is used quite often for solving research problems in image processing and vision, has not been explored much for IDS. In this paper, a DCNN architecture for IDS trained on the KDDCUP 99 dataset is proposed. This work also shows that the DCNN-IDS model performs better than other existing works.
A PSO-Based Subtractive Data Clustering Algorithm (IJORCS)
There is a tremendous proliferation in the amount of information available on the largest shared information source, the World Wide Web. Fast and high-quality clustering algorithms play an important role in helping users to effectively navigate, summarize, and organize the information. Recent studies have shown that partitional clustering algorithms such as the k-means algorithm are the most popular algorithms for clustering large datasets. The major problem with partitional clustering algorithms is that they are sensitive to the selection of the initial partitions and are prone to premature convergence to local optima. Subtractive clustering is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers for any given set of data. The cluster estimates can be used to initialize iterative optimization-based clustering methods and model identification methods. In this paper, we present a hybrid Subtractive + Particle Swarm Optimization (PSO) clustering algorithm that performs fast clustering. For comparison purposes, we applied the Subtractive + PSO, PSO, and Subtractive clustering algorithms to three different datasets. The results illustrate that the Subtractive + PSO clustering algorithm generates the most compact clustering results compared to the other algorithms.
Image Steganography Based On Non Linear Chaotic Algorithm (IJARIDEA Journal)
Abstract— Recent research on image steganography has increasingly been based on chaotic systems, yet the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are evident. This paper presents steganography with a nonlinear chaotic algorithm (NCA) which uses a power function and a tangent function instead of a linear function. Its structural parameters are obtained by experimental analysis. The message, with the key, is then combined with the cover image using LSB embedding and the discrete cosine transform.
Keywords— Discrete Cosine Transform, LSB Embedding, Nonlinear Chaotic Algorithm, Steganography, Stego Image.
Data Hiding and Retrieval using Visual Cryptography (AM Publications)
Nath et al. developed several methods for hiding data in a cover file using different steganography techniques. In some of these methods, Nath et al. first applied an encryption method before hiding the message in the cover file; for security reasons the secret message is encrypted before being inserted. To make the system harder to break, the authors used random insertion of bits, so that even if intruders can extract the bits from the cover file, they cannot reconstruct the original secret message. In the present work the authors applied a different data hiding algorithm based on visual cryptography. Visual cryptography is nowadays a very popular method for hiding a secret message inside multiple shares. Initially, researchers tried to hide simple black-and-white secret messages in two shares, but gradually they moved on to hiding color images (text, images, or any object) in two or more shares. In the present work the authors hide a color message/image in two or more shares. The interesting part of the present method is that from one share it is impossible to create the second share or to extract the hidden secret message without having the other share(s). The present method may be used for reconstructing passwords or any kind of important message or image, and may be applied in forensic departments or in defense for sending confidential messages.
Entropy based Digital Watermarking using 2-D Biorthogonal WAVELET (paperpublications3)
Abstract: Security is the most important aspect of a database. To maintain the integrity and security of the system, image watermarking, a technique proposed in 1996, can be used; in this paper we implement image watermarking using the 2-D biorthogonal wavelet. The importance of transmitting digital information in a digital watermarking system and the asymmetric digital watermarking framework for media content communication are also discussed. We then apply a watermark embedding algorithm to keep the balance between the watermark's imperceptibility and its robustness while the data is being sent over the communication channel. Keywords: Discrete Wavelet Transform (DWT), Gray Scale, Peak Signal to Noise Ratio (PSNR).
Title: Entropy based Digital Watermarking using 2-D Biorthogonal WAVELET
Author: Abhinav Kumar
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
This document summarizes various image compression techniques including lossy and lossless methods. For lossy compression, it discusses transformation coding techniques like discrete cosine transform (DCT) and discrete wavelet transform (DWT). It also covers vector quantization, fractal coding, block truncation coding, and subband coding. For lossless techniques, it describes run length encoding, Huffman coding, LZW coding, and area coding. Finally, it provides an overview of using neural networks for image compression, including backpropagation networks, hierarchical/adaptive networks, and multilayer perceptron networks.
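Among the lossless methods listed, run-length encoding is the simplest to illustrate; a minimal sketch over one row of pixel values (the sample row is made up):

```python
def rle_encode(pixels):
    """Collapse each run of identical values into a (value, length) pair."""
    runs = []
    for v in pixels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand the (value, length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

# Hypothetical row of a mostly-white binary image.
row = [255, 255, 255, 0, 0, 255, 255, 255, 255]
print(rle_encode(row))  # -> [(255, 3), (0, 2), (255, 4)]
```

The round trip is exact (lossless), and the encoding shrinks data only when long runs dominate, which is why RLE suits binary or palette images better than natural photographs.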
This presentation outlines how the women's rights' activists in India are seeking to address the issues around declining sex ratio without compromising women's access to safe abortion services
The document discusses abortion, including what it is, different types (induced and spontaneous), methods (medical and surgical), risks, and perspectives on the issue. Abortion is a procedure to end a pregnancy using medicine or surgery. There are two main types - induced abortions are intentionally terminated, while spontaneous abortions occur unintentionally. The two methods are medical, using abortion pills, or surgical vacuum aspiration or dilation and evacuation. Risks include infection, hemorrhaging, and damage to the uterus. Views on abortion consider both pros of access and health, and cons regarding moral and social impacts.
The document summarizes various forms of discrimination and exploitation faced by women in Indian society throughout history, from female infanticide and child marriage to domestic violence, dowry system, and sati practice. It discusses the patriarchal social structure and religious customs that promoted gender inequality and treated women as inferior. The presentation aims to highlight how women have been oppressed in areas like inheritance, mobility, and work and calls for contributions to uplift women's status and bring happiness in their lives.
This document discusses abortion from several perspectives. It defines abortion and different types, including spontaneous abortion (miscarriage), therapeutic abortion, and elective abortion. It describes various medical abortion methods used at different gestational periods, and discusses the historical background of abortion practices and laws over time in various cultures and countries. The document also specifically outlines Guyana's 1995 Medical Termination of Pregnancy Act, defining key terms and outlining counseling requirements and laws regarding termination of pregnancy within different gestational period thresholds.
The document summarizes key findings from India's 2011 census. Some highlights include:
- India's population reached 1.21 billion, a 17.64% growth rate since 2001.
- Kerala had the highest literacy rate at 93.91% while Bihar had the lowest at 63.82%.
- The overall sex ratio improved to 940 females per 1000 males, though the child sex ratio declined to 914 from 927.
- Population density increased to 382 people per square kilometer from 325 in 2001.
LSTM deep learning method for network intrusion detection system (IJECEIAES)
The security of the network has become a primary concern for organizations. Attackers use different means to disrupt services; these varied attacks call for a new way to block them all in one manner. In addition, these intrusions can change and penetrate security devices. To address these issues, we suggest in this paper a new approach to the Network Intrusion Detection System (NIDS) based on Long Short-Term Memory (LSTM) to recognize menaces and retain a long-term memory of them, in order to stop new attacks that resemble existing ones and, at the same time, to have a single means of blocking intrusions. According to the results of our detection experiments, accuracy reaches up to 99.98% and 99.93% for two-class and multi-class classification respectively, and the False Positive Rate (FPR) is only 0.068% and 0.023% respectively, which shows that the proposed model is effective: it has a great ability to memorize and differentiate between normal traffic and attacks, and its identification is more accurate than other machine learning classifiers.
Intrusion detection system for imbalance ratio class using weighted XGBoost c... (TELKOMNIKA JOURNAL)
This document summarizes a study that proposed an intrusion detection system to address the issue of imbalanced data ratios in classification models. The study used the XGBoost classifier with a weighted approach based on the imbalance ratio of each class to improve detection performance for minority and infrequent attack types in network traffic data. The proposed system was evaluated on the BotIoT dataset and showed improved detection rates compared to other methods, particularly for underrepresented attack classes. Experimental results demonstrated that the weighted XGBoost approach effectively handled class imbalance issues.
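The per-class weighting idea can be sketched in a few lines. This is a generic inverse-frequency scheme, not necessarily the exact weights used in the paper, and the class names are made up:

```python
from collections import Counter

def imbalance_weights(labels):
    """Weight each class inversely to its frequency, so every class
    contributes equal total weight to the training loss (a common
    remedy for class imbalance; the paper's scheme may differ)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# Hypothetical traffic: 90 normal flows vs. 10 DDoS flows.
y = ["normal"] * 90 + ["ddos"] * 10
w = imbalance_weights(y)
# Minority "ddos" samples carry 9x the weight of majority "normal" ones.
print(round(w["ddos"] / w["normal"], 6))  # -> 9.0
```

Per-sample weights built this way can be passed to a classifier's `sample_weight` argument so that infrequent attack classes are not drowned out by majority traffic.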
FAST DETECTION OF DDOS ATTACKS USING NON-ADAPTIVE GROUP TESTING (IJNSA Journal)
This document proposes a method for fast detection of DDoS attacks using non-adaptive group testing (NAGT). It begins with background on DDoS attacks and group testing techniques. It then describes using a strongly explicit d-disjunct matrix in NAGT to map IP addresses to "tests" performed by routers. The router counters would indicate potential hot items (attackers or victims). Two decoding algorithms are presented to identify the hot items from the test results with poly-log time complexity meeting data stream requirements. The method aims to provide early warning of DDoS attacks through efficient group testing of IP packets.
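The decoding idea (clear every item that appears in a negative test) can be sketched on a toy design. The 4-item, 4-test matrix below is an illustrative 1-disjunct example, not the paper's strongly explicit construction, and real deployments replace boolean tests with router counters:

```python
def run_tests(matrix, hot):
    """A test (one pooled router counter) is positive iff it covers
    at least one hot item (attacker or victim)."""
    return [any(i in hot for i in test) for test in matrix]

def decode(matrix, n_items, outcomes):
    """Naive NAGT decoding: any item appearing in a negative test is
    cleared; with a d-disjunct matrix and at most d hot items, the
    surviving items are exactly the hot ones."""
    cleared = set()
    for test, positive in zip(matrix, outcomes):
        if not positive:
            cleared.update(test)
    return sorted(set(range(n_items)) - cleared)

# A tiny 1-disjunct design over 4 items: each test pools two items, and
# for every ordered pair (i, j) some test contains i but not j.
tests = [{1, 3}, {2, 3}, {0, 2}, {0, 1}]
print(decode(tests, 4, run_tests(tests, hot={3})))  # -> [3]
```

The paper's contribution lies in building much larger disjunct matrices explicitly and decoding in poly-log time; this sketch only shows why disjunctness makes the naive elimination exact.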
IDS IN TELECOMMUNICATION NETWORK USING PCA (IJCNC Journal)
This document summarizes a research paper that proposes using principal component analysis (PCA) as a dimension reduction technique for intrusion detection systems (IDS). The paper applies PCA to reduce the number of features from 41 to either 6 or 10 features for the NSL-KDD dataset. One reduced feature set is used to develop a network IDS with high detection success and rate, while the other is used for a host IDS also with good detection success and very high detection rate. The paper outlines the process of applying PCA for IDS, including performing PCA on training data to identify principal components, then using those components to map new online data and detect intrusions based on deviation thresholds.
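The PCA reduction described above can be sketched with NumPy. Synthetic data stands in for NSL-KDD features, and flagging records by reconstruction error is one common way to realize the deviation-threshold idea the summary mentions, not necessarily the paper's exact procedure:

```python
import numpy as np

def pca_fit(X, k):
    """Center the training data and keep the top-k eigenvectors of its
    covariance matrix as the principal components."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    components = eigvecs[:, ::-1][:, :k]     # top-k directions by variance
    return mean, components

def reconstruction_error(X, mean, components):
    """Distance of each record from the PCA subspace; large deviations
    flag traffic that does not match the training profile."""
    Xc = X - mean
    proj = Xc @ components @ components.T
    return np.linalg.norm(Xc - proj, axis=1)

rng = np.random.default_rng(0)
# Synthetic "normal" traffic: 6 features that really vary along 2 directions.
normal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 6))
mean, comps = pca_fit(normal, k=2)
errs_normal = reconstruction_error(normal, mean, comps)
# A record far off the learned subspace stands out by its error.
anomaly = np.full((1, 6), 10.0)
err_anomaly = reconstruction_error(anomaly, mean, comps)
print(err_anomaly[0] > errs_normal.max())
```

In the paper's setting, the components fitted on training data are reused to map new online records, and an intrusion is reported when the deviation exceeds a threshold calibrated on normal traffic.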
DDOS DETECTION IN SOFTWARE-DEFINED NETWORK (SDN) USING MACHINE LEARNING (IJCI JOURNAL)
In recent years, the concepts of cloud computing and the software-defined network (SDN) have spread widely. The services provided by many sectors such as medicine, education, banking, and transportation are gradually being replaced with cloud-based applications. Consequently, the availability of these services is critical. However, the cloud infrastructure and services are vulnerable to attackers who aim to breach their availability. One of the major threats to any system's availability is a Denial-of-Service (DoS) attack, which is intended to deny legitimate users access to cloud resources. The Distributed Denial-of-Service (DDoS) attack is a type of DoS attack that is considerably more effective and dangerous. The research community has made many efforts to detect DDoS attacks; however, further work in this field is still needed. In this paper, machine learning techniques are used to build a model that can detect DDoS attacks in Software-Defined Networks (SDN). The chosen ML algorithms showed high performance in earlier studies, so they are used here along with a feature selection technique to detect DDoS attacks in network traffic. The outcome of the experiment shows the impact of feature selection on improving model performance. Ultimately, the Random Forest classifier achieved the highest accuracy of 0.99 in detecting DDoS attacks.
DENIAL OF SERVICE LOG ANALYSIS USING DENSITY K-MEANS METHOD (Ardymulya Iswardani)
1) The document describes a study on using density k-means clustering to analyze logs and detect denial of service attacks. Logs are clustered into three danger levels: low, medium, and high.
2) The density k-means algorithm was tested on a dataset of 11.3 million logs from a victim server under attack. The clustering resulted in two clusters corresponding to the medium and high danger levels.
3) The results were verified against original log files, which confirmed attacks on ports 21 and 443, corresponding to the detected LOIC and FTP brute force attacks.
FAST DETECTION OF DDOS ATTACKS USING NON-ADAPTIVE GROUP TESTING (IJNSA Journal)
Network security plays an increasingly important role today for personal users and organizations. Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks are serious problems in networks. The major challenges in the design of an efficient data-stream algorithm are one pass over the input, poly-log space, poly-log update time and poly-log reporting time. In this paper, we use strongly explicit constructions of d-disjunct matrices in non-adaptive group testing (NAGT) to meet these requirements and propose a solution for fast detection of DoS and DDoS attacks based on the NAGT approach.
This document discusses a novel method for intrusion awareness using Distributed Situational Awareness (D-SA). It proposes using D-SA and support vector machines (SVM) for network intrusion detection and classification. The method is evaluated using the KDD Cup 1999 intrusion detection dataset. Experimental results show the proposed D-SA method achieves higher detection rates compared to rule-based classification techniques.
An improved robust and secured image steganographic scheme (iaemedu)
The document summarizes an improved steganographic scheme that embeds secret data in images. It modifies an existing DCT-based scheme by embedding an "embedding map" to indicate the blocks used for concealment. The embedding map is also hidden using DWT coefficients and secured using SURF features. The proposed method aims to overcome limitations in the existing scheme like potential data loss during extraction due to changes in block energy values. Results show the scheme is robust against attacks like noise and compression while maintaining good image quality. However, capacity is still limited as only part of the image can hide the embedding map.
DETECTION OF ATTACKS IN WIRELESS NETWORKS USING DATA MINING TECHNIQUES (IAEME Publication)
With the progressive increase of network applications and electronic devices (computers, mobile phones, Android devices, etc.), attack and intrusion detection is becoming a very challenging task in the cybercrime detection area. In this context, most existing approaches to attack detection rely mainly on a finite set of attacks. These solutions are vulnerable, that is, they fail to detect some attacks when the sources of information are ambiguous or imperfect, and few approaches have started investigating in this direction. Following this trend, this paper investigates the role of machine learning approaches (ANN, SVM) in classifying TCP connection traffic as normal or suspicious. Since using ANN or SVM individually is expensive, this paper proposes combining the two classifiers: an artificial neural network (ANN) classifier and a support vector machine (SVM) are employed together. Additionally, our proposed solution allows visualizing the obtained classification results. The accuracy of the proposed solution has been compared with the results of other classifiers. Experiments have been conducted with different network connections selected from the NSL-KDD DARPA dataset. Empirical results show that combining ANN and SVM techniques for attack detection is a promising direction.
Intrusion Detection System using K-Means Clustering and SMOTE (IRJET Journal)
The document proposes a hybrid sampling technique combining K-Means clustering and SMOTE oversampling to address data imbalance in network intrusion detection systems. It first applies K-Means clustering to handle outliers in the data, and then uses SMOTE to generate additional samples for the minority (intrusion) class. This produces a balanced dataset for training classification models like Random Forest and CNN. The technique is evaluated on the NSL-KDD dataset and achieves accuracy above 94% with these models, outperforming an alternative approach using DBSCAN and SMOTE.
This document discusses the need for a collaborative intrusion detection system (IDS) for cloud computing. It proposes a framework where IDSs in cloud computing regions would correlate alerts from multiple detectors and exchange knowledge with each other. This could help reduce the impact of large-scale coordinated attacks by providing timely notifications about new intrusions. The document reviews related work on cloud logging and forensic analysis, and describes an architecture for a collaborative IDS framework consisting of IDS managers, dispatchers, and elementary detectors that communicate via encrypted messages.
A Collaborative Intrusion Detection System for Cloud Computing (ijsrd.com)
Cloud computing is a computing paradigm that shifts drastically from traditional computing architecture. Although this new paradigm brings many advantages, such as the utility computing model, its design is not flawless; hence it not only suffers from many known computer vulnerabilities but also introduces unique information confidentiality, integrity and availability risks due to its inherent design. Providing secure and reliable services in a cloud computing environment is an important issue. To counter a variety of attacks, especially large-scale coordinated attacks, a framework of a Collaborative Intrusion Detection System (IDS) is proposed. The proposed system could reduce the impact of these kinds of attacks by providing timely notifications about new intrusions to cloud users' systems. To provide such ability, IDSs in the cloud computing regions both correlate alerts from multiple elementary detectors and exchange knowledge of interconnected clouds with each other.
Preemptive modelling towards classifying vulnerability of DDoS attack in SDN ... (IJECEIAES)
Software-Defined Networking (SDN) has become an essential networking concept for escalating the networking capabilities demanded by the future internet, which is immensely distributed in nature. Being a novel concept in the networking field, it is still shrouded in security problems, and the Distributed Denial-of-Service (DDoS) attack is found to be one of the prominent problems in the SDN environment. After reviewing existing research solutions for resisting DDoS attacks in SDN, it is found that many open issues remain. Therefore, these issues are identified and addressed in this paper in the form of a preemptive model of security. Differing from existing approaches, this model is capable of identifying any malicious activity that leads to a DDoS attack by performing a correct classification of the attack strategy using a machine learning approach. The paper also discusses which machine learning classifiers are most effective against DDoS attacks.
Secure intrusion detection and attack measure selection (Uvaraj Shan)
This document proposes NICE, a framework for secure intrusion detection and attack mitigation in virtual network systems. NICE uses distributed agents on cloud servers to monitor traffic, detect vulnerabilities, and generate attack graphs. It profiles virtual machines to identify their state and vulnerabilities. When potential attacks are detected, NICE can quarantine suspicious VMs and inspect their traffic. The attack analyzer correlates alerts, constructs attack graphs, and selects appropriate countermeasures based on the graphs. Evaluations show NICE can effectively detect attacks while minimizing performance overhead for the cloud system.
Effective Multi-Stage Training Model for Edge Computing Devices in Intrusion ... (IJCNCJournal)
Intrusion detection poses a significant challenge within expansive and persistently interconnected environments. As malicious code continues to advance and sophisticated attack methodologies proliferate, various advanced deep learning-based detection approaches have been proposed. Nevertheless, the complexity and accuracy of intrusion detection models still need further enhancement to render them more adaptable to diverse system categories, particularly within resource-constrained devices, such as those embedded in edge computing systems. This research introduces a three-stage training paradigm, augmented by an enhanced pruning methodology and model compression techniques. The objective is to elevate the system's effectiveness, concurrently maintaining a high level of accuracy for intrusion detection. Empirical assessments conducted on the UNSW-NB15 dataset evince that this solution notably reduces the model's dimensions, while upholding accuracy levels equivalent to similar proposals.
Data Accuracy Models under Spatio-Temporal Correlation with Adaptive Strate... (IDES Editor)
Wireless sensor nodes continuously observe and sense statistical data from the physical environment, but the degree of accuracy of the data sensed collaboratively by the sensor nodes is a major issue for wireless sensor networks. Hence, in this paper we describe accuracy models for collecting accurate data from the physical environment under two conditions. Under the first condition, we propose an accuracy model that requires a priori knowledge of the statistical data of the physical environment, called the Estimated Data Accuracy (EDA) model. Simulation results show that the EDA model can sense more accurate data from the physical environment than the other information accuracy models in the network. Moreover, using the EDA model, there exists an optimal set of sensor nodes that is adequate to achieve approximately the same data accuracy level as the whole network. Finally, we simulate the EDA model under the threat of malicious attacks in the network due to extreme physical environments. Under the second condition, we propose another accuracy model using the steepest descent method, called the Adaptive Data Accuracy (ADA) model, which does not require any a priori information about input signal statistics. We also show that, using the ADA model, there exists an optimal set of sensor nodes that measures accurate data and is sufficient to achieve the same data accuracy level as the network. Further, in the ADA model we can reduce the amount of data transmission for this optimal set of sensor nodes using a model called the Spatio-Temporal Data Prediction (STDP) model. The STDP model captures the spatial and temporal correlation of sensing data to reduce the communication overhead under data reduction strategies. Finally, using the STDP model, we illustrate a mechanism to trace malicious nodes in the network under extreme physical environments. Computer simulations illustrate the performance of the EDA, ADA and STDP models respectively.
1. INT J COMPUT COMMUN, ISSN 1841-9836
8(1):70-78, February, 2013.
Detecting DDoS Attacks in Cloud Computing Environment
A.M. Lonea, D.E. Popescu, H. Tianfield
Alina Madalina Lonea
"Politehnica" University of Timisoara,
Faculty of Automation and Computers
B-dul Vasile Parvan, nr. 2, 300223, Timisoara, Romania
E-mail: madalina_lonea@yahoo.com
Daniela Elena Popescu
University of Oradea, Faculty of Electrical Eng. and Information Tech.
Universitatii street, nr. 1, 410087, Oradea, Romania
E-mail: depopescu@uoradea.ro
Huaglory Tianfield
School of Engineering and Built Environment,
Glasgow Caledonian University
Cowcaddens Road, Glasgow G4 0BA, United Kingdom
E-mail: h.tianfield@gcu.ac.uk
Abstract:
This paper focuses on detecting and analyzing Distributed Denial of Service (DDoS) attacks in cloud computing environments. This type of attack is often the source of cloud service disruptions. Our solution combines the evidence obtained from Intrusion Detection Systems (IDSs) deployed in the virtual machines (VMs) of the cloud systems with a data fusion methodology in the front-end. Specifically, when attacks occur, the VM-based IDSs yield alerts, which are stored in the MySQL database placed within the Cloud Fusion Unit (CFU) of the front-end server. We propose a quantitative solution for analyzing the alerts generated by the IDSs, using Dempster-Shafer theory (DST) operations in 3-valued logic and fault-tree analysis (FTA) for the mentioned flooding attacks. In the last step, our solution uses Dempster's combination rule to fuse evidence from multiple independent sources.
Keywords: cloud computing, cloud security, Distributed Denial of Service (DDoS) attacks, Intrusion Detection Systems, data fusion, Dempster-Shafer theory.
1 Introduction
Cloud computing technology is in continuous development and faces numerous security challenges. In this context, one of the main concerns for cloud computing is the trustworthiness of cloud services. This problem requires prompt resolution, because otherwise organizations adopting cloud services would be exposed to increased expenditure while at greater risk. A survey conducted by International Data Corporation (IDC) in August 2008 confirms that security is the major barrier for cloud users.
There are two things that cloud service providers should guarantee at all times: connectivity and availability. If these are not met, entire organizations will suffer high costs [1].
This paper focuses on detecting and analyzing Distributed Denial of Service (DDoS) attacks in cloud computing environments. This type of attack is often the source of cloud service disruptions. One of the efficient methods for detecting DDoS is to use Intrusion Detection Systems (IDSs), in order to assure usable cloud computing services [2]. However, IDS sensors have the limitation that they yield massive amounts of alerts and produce high false positive and false negative rates [3].
Copyright © 2006-2013 by CCC Publications
With regard to these IDS issues, our proposed solution aims to detect and analyze Distributed Denial of Service (DDoS) attacks in cloud computing environments, using Dempster-Shafer Theory (DST) operations in 3-valued logic and Fault-Tree Analysis (FTA) for each VM-based Intrusion Detection System (IDS). The basic idea is to obtain information from multiple sensors, which are deployed and configured in each virtual machine (VM). The obtained information is integrated in a data fusion unit, which takes the alerts from multiple heterogeneous sources and combines them using Dempster's combination rule. Our approach quantitatively represents the imprecision and efficiently utilizes it in IDS to reduce the false alarm rates.
Specifically, our solution combines the evidence obtained from the Intrusion Detection Systems (IDSs) deployed in the virtual machines (VMs) of the cloud system with a data fusion methodology within the front-end.
Our proposed solution can also address the problem of analyzing the logs generated by sensors, which is known to be a significant issue [4].
The remainder of this paper is organized as follows: Section 2 introduces Dempster-Shafer Theory. Section 3 presents related work on IDS in cloud computing and on IDS using data fusion. Section 4 introduces the proposed solution for detecting DDoS attacks in cloud computing. Finally, Section 5 presents the concluding remarks.
2 Dempster-Shafer Theory (DST)
Dempster-Shafer Theory was established by two people: Arthur Dempster, who introduced it in the 1960s, and Glenn Shafer, who developed it in the 1970s [5].
As an extension of Bayesian inference, the Dempster-Shafer Theory (DST) of Evidence is a powerful method in statistical inference, diagnostics, risk analysis and decision analysis. While in the Bayesian method probabilities are assigned only to single elements of the state space (Ω), in DST probabilities are assigned to mutually exclusive elements of the power set of possible states [6], [7].
According to the DST method, for a given state space Ω the probability (called mass) is allocated over the set of all possible subsets of Ω, namely the 2^|Ω| elements of its power set.
Consequently, the state space Ω is also called the frame of discernment, whereas the procedure for assigning probabilities is called basic probability assignment (bpa) [6], [7], [8].
We will apply a particular case of DST, i.e., the DST operations in 3-valued logic using fault-tree analysis (FTA), adopted by Guth (1991) and also used in Popescu, et al. (2010).
Thus, if the standard state space Ω is (True, False), then 2^Ω has 4 elements: { ∅, True, False, (True, False) }. The (True, False) element describes the imprecision component introduced by DST: the state is either true or false, but it is not known which. DST is a useful method for fault-tree analysts to quantitatively represent imprecision [8]. Another advantage of DST is that it can be efficiently utilized in IDS to reduce the false alarm rates through the representation of ignorance [6], [7], [10].
Because in DST the sum of all masses equals 1 and m(∅) = 0, we have the following relation:
m(True) + m(False) + m(True, False) = 1 (1)
In order to analyze the results of each sensor we use fault-tree analysis, which can be realized with a Boolean OR gate. Table 1 describes the Boolean truth table for the OR gate.
From Table 1 we have:
m(A) = (a1, a2, a3) = {m(T), m(F), m(T, F)} (2)
Table 1: BOOLEAN TRUTH TABLE FOR THE OR GATE

    ∨          | b1 = T | b2 = F | b3 = (T,F)
    a1 = T     | T      | T      | T
    a2 = F     | T      | F      | (T,F)
    a3 = (T,F) | T      | (T,F)  | (T,F)
m(B) = (b1, b2, b3) = {m(T), m(F), m(T, F)} (3)
⇒ m(A ∨ B) = (a1b1 + a1b2 + a1b3 + a2b1 + a3b1; a2b2; a2b3 + a3b2 + a3b3) (4)
m(A ∨ B) = (a1 + a2b1 + a3b1; a2b2; a2b3 + a3b2 + a3b3) (5)
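As an illustrative sketch (ours, not part of the paper), equations (4)-(5) can be coded directly as a function over two mass triples; the helper name `or_gate` is our own:

```python
def or_gate(a, b):
    """Combine two mass triples (m(T), m(F), m(T,F)) through the
    Boolean OR gate of the fault tree, per equations (4)-(5)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    m_true = a1 * (b1 + b2 + b3) + (a2 + a3) * b1  # some input is True
    m_false = a2 * b2                              # both inputs False
    m_both = a2 * b3 + a3 * b2 + a3 * b3           # residual imprecision
    return (m_true, m_false, m_both)

# Example with hypothetical masses; the result again sums to 1.
m_a, m_b = (0.6, 0.3, 0.1), (0.5, 0.4, 0.1)
m_or = or_gate(m_a, m_b)
```

Note that since b1 + b2 + b3 = 1 by equation (1), the first component reduces to a1 + (a2 + a3)·b1, which is exactly the simplification from (4) to (5).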
At the last step, our solution applies Dempster's combination rule, which allows fusing evidence from multiple independent sources using a conjunctive operation (AND) between two bpa's m1 and m2, called the joint m12 [11]:

m12(A) = ( Σ_{B ∩ C = A} m1(B) m2(C) ) / (1 − K),  when A ≠ ∅ (6)

m12(∅) = 0

and K = Σ_{B ∩ C = ∅} m1(B) m2(C)

The factor 1 − K, called the normalization factor, serves to entirely disregard the conflicting evidence.
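Equation (6) can be sketched generically in Python. This is our illustrative implementation, not the paper's: bpa's are represented as dictionaries from frozensets of states to masses, and the example masses are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two bpa's with Dempster's combination rule, equation (6).
    Each bpa maps a frozenset of states to its mass."""
    k = 0.0      # total conflicting mass K
    joint = {}
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:  # B ∩ C = A ≠ ∅ contributes to m12(A)
            joint[inter] = joint.get(inter, 0.0) + mb * mc
        else:      # B ∩ C = ∅ contributes to the conflict K
            k += mb * mc
    # Normalize by 1 - K so the joint masses again sum to 1.
    return {a: v / (1.0 - k) for a, v in joint.items()}

# Hypothetical 3-valued bpa's over the frame {True, False}.
T, F = frozenset({"T"}), frozenset({"F"})
TF = T | F
m1 = {T: 0.6, F: 0.3, TF: 0.1}
m2 = {T: 0.5, F: 0.4, TF: 0.1}
m12 = dempster_combine(m1, m2)
```

Here K = 0.6·0.4 + 0.3·0.5 = 0.39, and the surviving products are renormalized by 1 − K = 0.61.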
Data fusion is also applied in real-world domains: robotics, manufacturing, remote sensing and medical diagnosis, as well as in military threat assessment and weather forecast systems [12].
Sentz and Ferson (2002) demonstrated in their study that Dempster's combination rule is suitable when the sources of evidence are reliable and only minimal or irrelevant conflict is generated.
3 Related Work
3.1 Intrusion Detection Systems (IDS) in Cloud Computing
One IDS strategy that has proved reliable in cloud computing environments is to deploy an IDS in each virtual machine. This is the method we choose for our proposed solution. Mazzariello, et al. (2010) presented and evaluated this method in comparison with another IDS deployment strategy, which uses a single IDS near the cluster controller. Applying an IDS to each virtual machine in a cloud computing platform eliminates the overloading problem, because the network traffic is in effect split across all the IDSs. Thus, deploying an IDS per virtual machine avoids the drawback of the IDS strategy near the cluster controller, which tends to be overloaded by its need to monitor all the traffic of the cloud computing infrastructure. Another advantage of this strategy, as described by Roschke, et al. (2009), is the benefit of reducing the impact of possible attacks through the IDS Sensor VMs.
However, the limitation of the per-VM IDS strategy is the missing correlation phase, which is suggested as future work by Mazzariello, et al. (2010).
The correlation phase will be included in our proposed solution: besides an IDS for each virtual machine, our IDS cloud topology will include a Cloud Fusion Unit (CFU) on the front-end, with the purpose of obtaining and controlling the alerts received from the IDS Sensor VMs, as presented by Roschke, et al. (2009) in their theoretical IDS architecture for the cloud, which utilizes an IDS Management Unit.
Whereas Roschke, et al. (2009) suggested the use of the IDMEF (Intrusion Detection Message Exchange Format) standard, a useful component for the storage and exchange of alerts from the management unit, the alerts in our proposed solution will be stored in the MySQL database of the Cloud Fusion Unit. The Cloud Fusion Unit adds the capacity to analyze the results using the Dempster-Shafer theory (DST) of evidence in 3-valued logic and fault-tree analysis for the IDS of each virtual machine; at the end, the results of the sensors are fused using Dempster's combination rule.
A similar method using an IDS Management Unit is proposed by Dhage, et al. (2011), who presented a theoretical model of an IDS in cloud computing that uses a single IDS controller, which creates a single mini IDS instance for each user. This IDS instance can be used in multiple node controllers, and a node controller can contain IDS instances of multiple users. The analysis phase of the mini IDS instance for each user takes place in the IDS controller. Whereas in Roschke, et al. (2009) the emphasis is on how to realize the synchronization and integration of the IDS Sensor VMs, in Dhage, et al. (2011) the focus is on providing a clear understanding of the cardinality used in the basic architecture of an IDS in a cloud infrastructure.
Applying an IDS to each virtual machine is an idea also suggested by Lee, et al. (2011), who increase the effectiveness of IDS by assigning a multi-level intrusion detection system and log management analysis in cloud computing. In this scheme users receive an appropriate level of security, determined by the degree of the IDS applied to their virtual machine as well as by the prioritization stage of the log analysis documents. This multi-level security model addresses the issue of using resources effectively.
Lo, et al. (2010) proposed a cooperative IDS system for detecting DoS attacks in cloud computing networks, which has the advantage of protecting the system from single-point-of-failure attacks, even though it is a slower IDS solution than a pure Snort-based IDS. The framework proposed by Lo, et al. (2010) is a distributed IDS system, where each IDS is composed of three additional modules (block, communication and cooperation) which are added to the Snort IDS system.
3.2 IDS using Dempster-Shafer theory
Dempster-Shafer Theory (DST) is an effective solution for assessing the likelihood of DDoS attacks, as demonstrated by several research papers in the context of network intrusion detection systems. Dissanayake (2008) presented a survey on intrusion detection using DST.
Our study aims to detect DDoS attacks in cloud computing environments. Dempster-Shafer Theory (DST) is used to analyze the results received from each sensor (i.e. VM-based IDS).
The data used in experiments with DST vary: Yu and Frincke (2005) used the DARPA DDoS intrusion detection evaluation datasets, Chou et al. (2008) used the DARPA KDD99 intrusion detection evaluation dataset, Chen and Aickelin (2006) used the Wisconsin breast cancer dataset and the IRIS plant data, while other scientists generated their own data [7]. The data to be used in our proposed solution will be generated by ourselves, by performing DDoS attacks with specific tools against the VM-based IDSs.
Siaterlis, et al. (2003) and Siaterlis and Maglaris (2005) performed a similar study of detecting DDoS using data fusion; their field was an operational university campus network, while in our solution the DDoS attacks are to be detected and analyzed in our private cloud computing environment.
Additionally, we consider analyzing the attacks generated against TCP, UDP and ICMP packets, like Siaterlis, et al. (2003) and Siaterlis and Maglaris (2005). However, instead of applying DST on the state space Ω = {Normal, UDP-flood, SYN-flood, ICMP-flood}, our study uses DST operations in 3-valued logic as suggested by Guth (1991) for the same flooding attacks (TCP-flood, UDP-flood, ICMP-flood), for each VM-based IDS. Like Siaterlis and Maglaris (2005), Chatzigiannakis, et al. (2007) chose the same frame of discernment, while Hu, et al. (2006) used the state space {Normal, TCP, UDP, ICMP}.
Furthermore, compared with the studies performed by Siaterlis, et al. (2003) and Siaterlis and Maglaris (2005), who use a minimal neural network at the sensor level, our proposed solution will assign the probabilities using DST in 3-valued logic, the pseudocode and the fault-tree analysis. Whilst the computational complexity of DST increases exponentially with the number of elements in the frame of discernment [12], the DST 3-valued logic to be used in our research does not encounter this issue, and thus meets the efficiency requirements in terms of both detection rate and computation time [15].
Finally, the fusion of the evidence obtained from sensors studied by Siaterlis and Maglaris (2005) will be used in our study. The data fusion will be realized using the Dempster-Shafer combination rule, whose advantages were demonstrated in Siaterlis and Maglaris (2005): maximization of the DDoS true positive rate and minimization of the false positive alarm rate, by combining the evidence received from the sensors. As a result, the work of cloud administrators is alleviated, since the number of alerts decreases.
4 Proposed Solution
In order to detect and analyze Distributed Denial of Service (DDoS) attacks in cloud computing environments we propose the solution presented in Figure 1. For illustration purposes, a private cloud with a front-end and three nodes is set up. Whilst the detection stage is executed within the nodes, more precisely inside the virtual machines (VMs) where the Intrusion Detection Systems (IDSs) are installed and configured, the attack assessment phase is handled inside the front-end server, in the Cloud Fusion Unit (CFU).
The first step in our solution is the deployment of a private cloud using the Eucalyptus open-source version 2.0.3. The topology of the implemented private cloud is: a front-end (with Cloud Controller, Walrus, Cluster Controller and Storage Controller) and a back-end (i.e. three nodes). The Managed networking mode is chosen because of the advanced features it provides, and the Xen hypervisor is used for virtualization.
Then, the VM-based IDSs are created by installing and configuring Snort in each VM. The reason for this IDS placement is that overloading problems can be avoided and the impact of possible attacks can be reduced [2], [13].
These IDSs will yield alerts, which will be stored in the MySQL database placed within the Cloud Fusion Unit (CFU) of the front-end server. A single database is used in order to reduce the risk of losing data, to maximize the resource usage inside the VMs and to simplify the work of the cloud administrator, who will have all the alerts in the same place. A similar idea of obtaining and controlling the alerts received from the IDS Sensor VMs using an IDS Management Unit was presented by Roschke, et al. (2009) as a theoretical IDS architecture for the cloud. A similar method using an IDS Management Unit is proposed in Dhage, et al. (2011). However, our solution adds the capacity to analyze the results using the Dempster-Shafer theory of evidence in 3-valued logic.
As shown in Figure 1, the Cloud Fusion Unit (CFU) comprises 3 components: the MySQL database, the bpa calculation and the attack assessment.
Figure 1: IDS Cloud Topology
I. MySQL database
The MySQL database is introduced with the purpose of storing the alerts received from the VM-based IDSs. Furthermore, these alerts will be converted into basic probability assignments (bpa's), which will be calculated using the pseudocode below.
II. Basic probability assignment (bpa's) calculation
To calculate the basic probability assignments, we first decide on the state space Ω. In this paper we use DST operations in 3-valued logic {True, False, (True, False)} (Guth (1991)) for the following flooding attacks: TCP-flood, UDP-flood and ICMP-flood, for each VM-based IDS. Thus, the analyzed packets will be TCP, UDP and ICMP. Further, a pseudocode for converting the alerts received from the VM-based IDSs into bpa's is provided. The purpose of this pseudocode is to obtain the following probabilities from the alerts received from each VM-based IDS:
(mUDP(T), mUDP(F), mUDP(T, F))
(mTCP(T), mTCP(F), mTCP(T, F))
(mICMP(T), mICMP(F), mICMP(T, F))
A.M. Lonea, D.E. Popescu, H. Tianfield
Figure 2: BPA’s calculation
Pseudocode for converting the alerts into bpas:
For each node
Begin
For each X ∈ {UDP, TCP, ICMP}:
Begin
1: Query the alerts from the database where an X attack occurs for the specified hostname
2: Query the total number of possible X alerts for the hostname
3: Query the alerts from the database where the X attack is unknown
4: Calculate Belief(True) for X by dividing the result of step 1 by the result of step 2
5: Calculate Belief(True, False) for X by dividing the result of step 3 by the result of step 2
6: Calculate Belief(False) for X as 1 − Belief(True) − Belief(True, False)
End
End
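The six steps above can be sketched in code. This is an illustrative sketch only: the function name and the plain integer counts stand in for the actual MySQL queries of steps 1-3, which the paper does not spell out.

```python
def bpa_from_counts(n_attack, n_total, n_unknown):
    """Convert alert counts for one packet type X on one node into a bpa.

    n_attack  -- alerts where an X attack occurred (step 1)
    n_total   -- total number of possible X alerts for the host (step 2)
    n_unknown -- alerts where the X attack is unknown (step 3)
    """
    m_true = n_attack / n_total             # step 4: Belief(True)
    m_true_false = n_unknown / n_total      # step 5: Belief(True, False)
    m_false = 1.0 - m_true - m_true_false   # step 6: Belief(False)
    return {"T": m_true, "F": m_false, "TF": m_true_false}
```

For example, 30 attack alerts and 10 unknown alerts out of 100 possible UDP alerts would give mUDP(T) = 0.3, mUDP(T, F) = 0.1 and mUDP(F) = 0.6; the three masses always sum to 1.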
Furthermore, after obtaining the probabilities for each attack packet (i.e. UDP, TCP, ICMP)
for each VM-based IDS, the probabilities for each VM-based IDS are calculated following
the fault tree shown in Figure 2. Figure 2 shows only the calculation of the probabilities (i.e.
mS1(T), mS1(F), mS1(T, F)) for the first VM-based IDS.
Thus, using the DST with fault-tree analysis we can calculate the belief (Bel) and plausibility
(Pl) values for each VM-based IDS:
Bel(S1) = mS1(T) (7)
Pl(S1) = mS1(T) + mS1(T, F) (8)
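As a sketch of the fault-tree step, the three per-protocol bpas can be combined through Guth's 3-valued OR gate, whose output is False only when all inputs are False, True whenever any input is True, and (True, False) otherwise. The dictionary representation and function names below are assumptions of this sketch, not the paper's notation:

```python
def or_gate(m1, m2):
    # Guth (1991) 3-valued OR gate: the result is False only when both
    # inputs are False, True whenever either input is True, and the
    # uncertain state (True, False) otherwise.
    m_false = m1["F"] * m2["F"]
    m_tf = m1["F"] * m2["TF"] + m1["TF"] * m2["F"] + m1["TF"] * m2["TF"]
    m_true = 1.0 - m_false - m_tf
    return {"T": m_true, "F": m_false, "TF": m_tf}

def sensor_bpa(m_udp, m_tcp, m_icmp):
    # Fault tree of Figure 2: OR the three protocol bpas into one
    # sensor-level bpa (mS1(T), mS1(F), mS1(T,F)).
    return or_gate(or_gate(m_udp, m_tcp), m_icmp)

def belief(m):        # equation (7): Bel(S1) = mS1(T)
    return m["T"]

def plausibility(m):  # equation (8): Pl(S1) = mS1(T) + mS1(T,F)
    return m["T"] + m["TF"]
```

A sanity check on the design: a protocol bpa that is certainly False (m(F) = 1) is the identity element of the OR gate, so a sensor reporting attacks on only one protocol inherits that protocol's bpa unchanged.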
III. Attack assessment
The attack assessment consists of fusing the evidence obtained from the sensors using
Dempster's combination rule, with the purpose of maximizing the DDoS true positive
rate and minimizing the false positive alarm rate. mS1,S2(T) can be calculated using Table 2
and equation (6).
Table 2: Boolean truth table for the OR gate

|          | mS1(T)          | mS1(F)          | mS1(T,F)          |
|----------|-----------------|-----------------|-------------------|
| mS2(T)   | mS1(T) mS2(T)   | mS1(F) mS2(T)   | mS1(T,F) mS2(T)   |
| mS2(F)   | mS1(T) mS2(F)   | mS1(F) mS2(F)   | mS1(T,F) mS2(F)   |
| mS2(T,F) | mS1(T) mS2(T,F) | mS1(F) mS2(T,F) | mS1(T,F) mS2(T,F) |
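Dempster's rule on the two-element frame {True, False} can be sketched as follows. This is an illustrative implementation under the paper's 3-valued representation, not the authors' code; the dictionary keys are an assumption of the sketch.

```python
def dempster_combine(m1, m2):
    # Dempster's rule of combination on the frame {T, F}, with focal
    # elements T, F and the uncertain set (T, F).
    # The conflict K is the mass assigned to contradictory pairs (T with F).
    k = m1["T"] * m2["F"] + m1["F"] * m2["T"]
    if k >= 1.0:
        raise ValueError("Total conflict: the sources cannot be combined")
    norm = 1.0 - k  # normalisation redistributes the conflicting mass
    m_true = (m1["T"] * m2["T"] + m1["T"] * m2["TF"] + m1["TF"] * m2["T"]) / norm
    m_false = (m1["F"] * m2["F"] + m1["F"] * m2["TF"] + m1["TF"] * m2["F"]) / norm
    m_tf = (m1["TF"] * m2["TF"]) / norm
    return {"T": m_true, "F": m_false, "TF": m_tf}
```

Two sensors that both lean toward an attack reinforce each other: combining, say, (0.6, 0.3, 0.1) with (0.7, 0.2, 0.1) yields a fused mS1,S2(T) of about 0.82, higher than either sensor alone, which is the effect the paper relies on to raise the true positive rate.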
5 Conclusions
To detect and analyze Distributed Denial of Service (DDoS) attacks in cloud computing
environments, we have proposed a solution using Dempster-Shafer Theory (DST) operations in
3-valued logic and Fault-Tree Analysis (FTA) for each VM-based Intrusion Detection System
(IDS). Our solution quantitatively represents imprecision and efficiently uses it in the IDS to
reduce false alarm rates through the representation of ignorance.
Whilst the computational complexity of DST increases exponentially with the number of
elements in the frame of discernment [12], the DST 3-valued logic in our solution does not have
this issue and meets the efficiency requirements in terms of both detection rate and computation
time. At the same time, the usability requirement is met, because the work of cloud
administrators is alleviated by the Dempster rule of evidence combination: the number of alerts
decreases and the conflict generated by combining information provided by multiple sensors is
resolved.
To sum up, by using DST our proposed solution has the following advantages: it accommodates
the uncertain state, reduces false negative rates, increases the detection rate, resolves
the conflicts generated by combining information provided by multiple sensors, and alleviates
the work of cloud administrators.
Acknowledgment
This work was partially supported by the strategic grant POSDRU/88/1.5/S/50783, Project
ID50783 (2009), co-financed by the European Social Fund - Investing in People, within the
Sectoral Operational Programme Human Resources Development 2007-2013.
Bibliography
[1] Perry, G., Minimizing public cloud disruptions, TechTarget, [online]. Available at:
http://searchdatacenter.techtarget.com/tip/Minimizing-public-cloud-disruptions, 2011.
[2] Roschke, S., Cheng, F. and Meinel, C., Intrusion Detection in the Cloud. In Eighth IEEE
International Conference on Dependable, Autonomic and Secure Computing, pp. 729-734,
2009.
[3] Yu, D. and Frincke, D., A Novel Framework for Alert Correlation and Understanding.
International Conference on Applied Cryptography and Network Security (ACNS) 2004,
Springer LNCS series, 3089, pp. 452-466, 2004.
[4] Lee, J-H., Park, M-W., Eom, J-H. and Chung, T-M., Multi-level Intrusion Detection System
and Log Management in Cloud Computing. In 13th International Conference on Advanced
Communication Technology (ICACT 2011), Seoul, 13-16 February, pp. 552-555, 2011.
[5] Chen, Q. and Aickelin, U., Dempster-Shafer for Anomaly Detection. In Proceedings of the
International Conference on Data Mining (DMIN 2006), Las Vegas, USA, pp. 232-238, 2006.
[6] Siaterlis, C., Maglaris, B. and Roris, P., A novel approach for a Distributed Denial of Service
Detection Engine. National Technical University of Athens. Athens, Greece, 2003.
[7] Siaterlis, C. and Maglaris, B., One step ahead to Multisensor Data Fusion for DDoS
Detection. Journal of Computer Security, 13(5):779-806, 2005.
[8] Guth, M.A.S., A Probabilistic Foundation for Vagueness & Imprecision in Fault-Tree
Analysis. IEEE Transactions on Reliability, 40(5), pp. 563-569, 1991.
[9] Popescu, D.E., Lonea, A.M., Zmaranda, D., Vancea, C. and Tiurbe, C., Some Aspects about
Vagueness & Imprecision in Computer Network Fault-Tree Analysis. INT J COMPUT
COMMUN, ISSN: 1841-9836, 5(4):558-566, 2010.
[10] Esmaili, M., Dempster-Shafer Theory and Network Intrusion Detection Systems. Scientia
Iranica, Vol. 3, No. 4, Sharif University of Technology, 1997.
[11] Sentz, K. and Ferson, S., Combination of Evidence in Dempster-Shafer Theory. Sandia
National Laboratories, Sandia Report, 2002.
[12] Dissanayake, A., Intrusion Detection Using the Dempster-Shafer Theory. 60-510 Literature
Review and Survey, School of Computer Science, University of Windsor, 2008.
[13] Mazzariello, C., Bifulco, R. and Canonico, R., Integrating a Network IDS into an Open
Source Cloud Computing Environment. In Sixth International Conference on Information
Assurance and Security, pp. 265-270, 2010.
[14] Dhage, S. N., et al., Intrusion Detection System in Cloud Computing Environment. In
International Conference and Workshop on Emerging Trends in Technology (ICWET 2011),
TCET, Mumbai, India, pp. 235-239, 2011.
[15] Lo, C-C., Huang, C-C. and Ku, J., A Cooperative Intrusion Detection System Framework
for Cloud Computing Networks. In 39th International Conference on Parallel Processing
Workshops, pp. 280-284, 2010.
[16] Yu, D. and Frincke, D., Alert Confidence Fusion in Intrusion Detection Systems with
Extended Dempster-Shafer Theory. ACM-SE 43: Proceedings of the 43rd ACM Southeast
Conference, pp. 142-147, 2005.
[17] Chou, T., Yen, K.K., Luo, J., Network intrusion detection design using feature selection of
soft computing paradigms. International Journal of Computational Intelligence, 4(3):102-
105, 2008.
[18] Chatzigiannakis, V., et al., Data fusion algorithms for network anomaly detection:
classification and evaluation. Proceedings of the Third International Conference on Networking
and Services (ICNS'07), 2007.
[19] Hu, W., Li, J. and Gao, Q., Intrusion Detection Engine Based on Dempster-Shafer's
Theory of Evidence. Communications, Circuits and Systems Proceedings, 2006 International
Conference, 3:1627-1631, 2006.