This is a report format for CSC 347 – Computer Hardware and Maintenance. It was prepared for IUBAT University, but I assume any university can use this format.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Fuzzy Clustering Approach in Segmentation of T1-T2 Brain MRI – IDES Editor
Segmentation is a difficult and challenging problem in magnetic resonance images, and it is considered important in computer vision and artificial intelligence. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. In this paper, we present a novel FCM algorithm for weighted bias (also called intensity inhomogeneity) estimation and segmentation of MRI. Normally, intensity inhomogeneities are attributed to imperfections in the radio-frequency coils or to problems associated with image acquisition. Our algorithm is formulated by modifying the objective function of the standard FCM, and it has the advantage that it can be applied at an early stage of automated data analysis. Further, this paper proposes a center-knowledge method to reduce the running time of the proposed algorithm. The proposed method deals effectively with intensity inhomogeneities and image noise. We have compared our results with other reported methods: results using real MRI data show that our method provides better results than standard FCM-based algorithms and other modified FCM-based techniques.
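The baseline this abstract modifies is the standard FCM objective, which alternates two updates: memberships from distances to centers, then centers as membership-weighted means. A minimal sketch on toy 1-D intensities (illustrative data, not the paper's bias-corrected variant):

```python
def fcm_step(data, centers, m=2.0):
    """One iteration of standard fuzzy c-means on 1-D data:
    update memberships from current centers, then recompute centers."""
    # Membership u[i][k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    u = []
    for x in data:
        d = [abs(x - c) + 1e-12 for c in centers]  # avoid divide-by-zero
        row = []
        for k in range(len(centers)):
            row.append(1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d))
        u.append(row)
    # Center c_k = sum_i u_ik^m * x_i / sum_i u_ik^m
    new_centers = []
    for k in range(len(centers)):
        num = sum((u[i][k] ** m) * data[i] for i in range(len(data)))
        den = sum((u[i][k] ** m) for i in range(len(data)))
        new_centers.append(num / den)
    return u, new_centers

# Toy 1-D "intensity" data with two obvious clusters
data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
centers = [0.0, 6.0]
for _ in range(20):
    u, centers = fcm_step(data, centers)
print([round(c, 2) for c in centers])
```

The alternating updates converge to centers near the two intensity groups; the paper's contribution is an extra bias-field term in this objective, which the sketch omits.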
Hybrid Compression Based on Stationary Wavelet Transforms – Omar Ghazi
This document presents a hybrid compression approach for images that uses Stationary Wavelet Transforms (SWT), a Back Propagation Neural Network (BPNN), and Lempel-Ziv-Welch (LZW) compression. The approach involves: 1) preprocessing the image, 2) applying the SWT, 3) converting to a 1D vector using a zigzag scan, and 4) hybrid compression using BPNN vector quantization and LZW lossless compression. Experimental results show that SWT with BPNN and LZW achieves the highest compression ratios but the longest processing time, while SWT with run-length encoding has a lower ratio but a shorter time. The hybrid approach combines lossy and lossless compression techniques to obtain a higher overall compression ratio.
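Step 3 of the pipeline, the zigzag scan, can be sketched as follows; the matrix values are illustrative:

```python
def zigzag(matrix):
    """Flatten a 2-D matrix into a 1-D vector by zigzag (anti-diagonal)
    scan, JPEG-style, as in step 3 of the pipeline above."""
    rows, cols = len(matrix), len(matrix[0])
    out = []
    for s in range(rows + cols - 1):              # each anti-diagonal
        diag = [matrix[i][s - i] for i in range(rows) if 0 <= s - i < cols]
        out.extend(diag if s % 2 else diag[::-1])  # alternate direction
    return out

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(zigzag(m))  # [1, 2, 4, 7, 5, 3, 6, 8, 9]
```

The scan groups similar-magnitude wavelet coefficients together, which helps the later LZW stage find repeated runs.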
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING – ijdkp
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple, more accurate process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop efficient Hidden Markov Model (HMM) training. The proposed process consists of two steps. In the first step, training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster; the Dynamic Time Warping technique is used as the dissimilarity function to cluster similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while yielding the same initial, transition and emission probability matrices as the classical HMM training algorithm; accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2,200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs; it can be employed for a large variety of ML methods.
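The Dynamic Time Warping dissimilarity used in the first step can be sketched with the classic dynamic-programming recurrence (toy integer sequences, not the paper's data):

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two sequences: the minimal
    cumulative cost of aligning them, allowing stretches in time."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0 – same shape, different length
print(dtw([1, 2, 3], [4, 5, 6]))     # 9.0
```

Because DTW tolerates differing lengths and local time shifts, it groups instances that a plain Euclidean distance would keep apart, which is why it suits the clustering step here.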
Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe... – csandit
Single-channel speech intelligibility enhancement is much more difficult than multi-channel intelligibility enhancement. It has recently been reported that machine-learning, training-based single-channel speech intelligibility enhancement algorithms perform better than traditional algorithms. In this paper, the performance of a recently proposed deep neural network method using a multiresolution cochleagram feature set for single-channel speech intelligibility enhancement is evaluated. Various conditions, such as different speakers for training and testing as well as different noise conditions, are tested. Simulations and objective test results show that the method performs better than another deep neural network setup recently proposed for the same task, and leads to more robust convergence compared to a recently proposed Gaussian mixture model approach.
Image Segmentation Using Two Weighted Variable Fuzzy K-Means – Editor IJCATR
Image segmentation is the first step in image analysis and pattern recognition. It is the process of dividing an image into different regions such that each region is homogeneous. An accurate and effective segmentation algorithm is very useful in many fields, especially in medical imaging. This paper presents a new approach to image segmentation that applies a k-means algorithm with two-level variable weighting. In image segmentation, clustering algorithms are very popular because they are intuitive and easy to implement. The k-means and fuzzy k-means clustering algorithms are among the most widely used in the literature, and many authors compare their new proposals against the results achieved by k-means and fuzzy k-means. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable-weighting clustering algorithm for segmenting objects. In this algorithm, a variable weight is also assigned to each variable on the current partition of the data. It can be applied to general images and/or specific images (e.g., medical and microscopic images). The proposed TW-fuzzy k-means algorithm provides better segmentation performance for various types of images: based on the results obtained, it gives better visual quality than several other clustering methods.
Review of Diverse Techniques Used for Effective Fractal Image Compression – IRJET Journal
This document reviews different techniques for fractal image compression to enhance compression ratio while maintaining image quality. It discusses algorithms like quadtree partitioning with Huffman coding (QPHC), discrete cosine transform based fractal image compression (DCT-FIC), discrete wavelet transform based fractal image compression (DWTFIC), and Grover's quantum search algorithm based fractal image compression (QAFIC). The document also analyzes works applying these techniques and concludes that combining QAFIC with the tiny block size processing algorithm may further improve compression ratio with minimal quality loss.
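The quadtree partitioning behind QPHC can be sketched as a recursive split on a homogeneity test; the threshold and image values below are illustrative:

```python
def quadtree(block, x, y, size, threshold, leaves):
    """Quadtree partitioning: split a square region into four quadrants
    until its pixel range falls below a homogeneity threshold, collecting
    the resulting leaf blocks as (x, y, size) tuples."""
    vals = [block[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= threshold:
        leaves.append((x, y, size))
        return
    half = size // 2
    for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
        quadtree(block, x + dx, y + dy, half, threshold, leaves)

img = [[10, 10, 50, 60],
       [10, 10, 55, 65],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
leaves = []
quadtree(img, 0, 0, 4, threshold=5, leaves=leaves)
print(len(leaves))  # 7 – one noisy quadrant splits into four 1x1 leaves
```

Flat regions stay as large blocks while detailed regions split finely, which is what lets the later entropy coder (Huffman in QPHC) spend bits only where needed.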
STUDY OF TASK SCHEDULING STRATEGY BASED ON TRUSTWORTHINESS – ijdpsjournal
MapReduce is a distributed computing model for cloud computing that processes massive data and simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism; it evaluates the trustworthiness of resource nodes in the cloud environment and builds a trustworthiness model. Using the CloudSim simulation platform, the stability of the task scheduling algorithm and the scheduling model are verified.
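A minimal sketch of trust-aware task placement, assuming a hypothetical trust score based on each node's past success rate (the paper's trustworthiness model and recovery mechanism are more elaborate):

```python
def trust(node):
    """Hypothetical trustworthiness score: success rate of past task
    executions, Laplace-smoothed so new nodes get a neutral score."""
    return (node["ok"] + 1) / (node["ok"] + node["fail"] + 2)

def schedule(task, nodes, min_trust=0.5):
    """Assign the task to the most trustworthy node above a threshold;
    a failure-recovery step would retry on the next candidate."""
    for node in sorted(nodes, key=trust, reverse=True):
        if trust(node) >= min_trust:
            return node["name"]
    return None  # no sufficiently reliable node available

nodes = [{"name": "n1", "ok": 9, "fail": 1},
         {"name": "n2", "ok": 2, "fail": 8},
         {"name": "n3", "ok": 5, "fail": 0}]
print(schedule("map-task-7", nodes))  # n3 – trust 6/7 ≈ 0.86 is highest
```

Routing tasks away from low-trust nodes is exactly what avoids the wasted re-executions the abstract describes.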
Chaotic Block Image Scheme Using Large Key Space and Message Digest Algorithm – CSCJournals
In this paper, a chaotic block image scheme using a large key space and a message digest algorithm is proposed. A cat map is used for confusion, and a 2D Sine-Tent Composite map (2D-STCM) key generator is used for diffusion. Confusion is implemented by the 2D cat map with an arbitrary block size: in the first stage, the 2D cat map locally shuffles the indexes inside blocks, while in the second stage it globally shuffles the indexes of the whole image. The designed algorithm executes two confusions and one diffusion in each iteration. To increase the security level, the message digest algorithm is used as a fingerprint of the plain image that creates the initial value of the key; the 2D-STCM then generates a large key stream. Diffusion is implemented by an XOR operation between the key stream and the confused image. Experimental results show that the security level increases due to the integration of confusion and diffusion, while the large key space and high sensitivity to the secret keys guarantee the security performance. The performance measures reach the top values among similar research. To verify the obtained results, the authors implemented the inverse chaos. All tests were processed in MATLAB 2015a.
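The cat-map confusion step can be illustrated on a tiny block. The map below is the classic Arnold cat map, shown here only to demonstrate that the shuffle is an invertible permutation (the paper's block sizes and staging differ):

```python
def cat_map(n, x, y):
    """One step of the 2-D Arnold cat map on an n x n index grid:
    (x, y) -> (x + y, x + 2y) mod n.  Determinant 1, so it permutes
    the grid without losing any position."""
    return (x + y) % n, (x + 2 * y) % n

def shuffle_block(block):
    """Confusion step: relocate every pixel of a square block by the map."""
    n = len(block)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            nx, ny = cat_map(n, x, y)
            out[nx][ny] = block[x][y]
    return out

block = [[1, 2], [3, 4]]
once = shuffle_block(block)
# The cat map is periodic: iterating it eventually restores the original.
b, period = shuffle_block(block), 1
while b != block:
    b, period = shuffle_block(b), period + 1
print(period)  # 3 for a 2x2 block
```

Periodicity is why such schemes pair confusion with a keyed diffusion step: shuffling alone can be undone by simply iterating the map.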
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPS – IJNSA Journal
This document proposes an image encryption algorithm that uses diffusion and multiple chaotic maps. It begins by generating subkeys using chaotic logistic maps. The image is then encrypted using one subkey via logistic map transformation, diffusing the image. Additional subkeys are generated from four chaotic maps by hopping through various map orbits. The image is treated as a 1D array via raster and zigzag scanning, divided into blocks, and those blocks undergo position permutation and value transformation controlled by the chaotic subkeys, fully encrypting the image. Decryption reverses the process using the same subkeys.
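The diffusion idea shared by schemes like this one can be sketched as a logistic-map keystream XORed with the data. The parameters x0 and r below are illustrative, not the paper's subkeys:

```python
def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes from logistic-map iterates
    x -> r*x*(1-x); tiny changes to x0 or r change the whole stream."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)  # quantize the iterate to a byte
    return out

def xor_cipher(data, key):
    """Diffusion by XOR: the same call both encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, key))

plain = b"pixel row"
key = logistic_keystream(x0=0.3141, r=3.99, n=len(plain))
cipher = xor_cipher(plain, key)
print(xor_cipher(cipher, key) == plain)  # True – XOR is self-inverse
```

Decryption reversing the process with the same subkeys, as the abstract says, follows directly from XOR being its own inverse.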
Automatic Determination Number of Cluster for NMKFC-Means Algorithms on Image... – IOSR Journals
This document discusses image segmentation techniques using clustering algorithms. It introduces Fuzzy C-Means (FCM) clustering, which allows data points to belong to multiple clusters with varying degrees of membership. However, FCM does not work well on noisy or non-linearly separable data. The document proposes the Kernel Fuzzy C-Means (KFCM) algorithm, which uses a kernel function to map data to a higher-dimensional space, making separation easier. While improving results for noisy images, KFCM does not consider neighboring pixels. Finally, the document introduces the Novel Modified Kernel Fuzzy C-Means (NMKFCM) algorithm, which incorporates neighborhood information into the objective function to further improve segmentation accuracy, especially for noisy images.
IRJET- Chatbot Using Gated End-to-End Memory Networks – IRJET Journal
The document describes a proposed chatbot system that uses a gated end-to-end memory network model for hospital appointment booking. The model is trained on dialog data consisting of user utterances and bot responses related to booking appointments. It uses an attention mechanism over the dialog memory to select relevant parts of the conversation. The model is trained end-to-end to dynamically regulate interactions with the memory. Experiments show it can handle new combinations of fields when booking appointments in a simulated hospital reservation scenario.
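The attention mechanism over dialog memory can be sketched as a single-hop softmax read; the embeddings below are toy vectors, not trained ones, and the gating of the real model is omitted:

```python
import math

def attention_read(query, memory):
    """Single-hop memory attention: score each memory vector against the
    query (dot product), softmax the scores, and return the
    attention-weighted sum of the memory."""
    scores = [sum(q * m for q, m in zip(query, mem)) for mem in memory]
    mx = max(scores)
    exp = [math.exp(s - mx) for s in scores]          # numerically stable softmax
    total = sum(exp)
    weights = [e / total for e in exp]
    read = [sum(w * mem[i] for w, mem in zip(weights, memory))
            for i in range(len(query))]
    return weights, read

memory = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]  # past utterance embeddings
query = [1.0, 0.0]                              # current user utterance
weights, read = attention_read(query, memory)
print(max(range(3), key=lambda i: weights[i]))  # 0 – most relevant memory
```

This selection of relevant conversation parts is what lets the bot carry earlier slot values (doctor, date, time) into later booking turns.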
This document discusses classifying network traffic flows into quality of service (QoS) classes using unsupervised machine learning and K-nearest neighbor clustering. It first reviews previous work in traffic classification. It then uses self-organizing maps and K-means clustering as unsupervised methods to identify three inherent traffic classes - transactional, bulk data transfer, and interactive applications. The K-nearest neighbor classifier is then evaluated and found to have a low error rate of around 2% for test data, significantly better than a minimum mean distance classifier with 7% error.
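The K-nearest-neighbor classifier evaluated above can be sketched with made-up 2-D flow features (the study's actual feature set and traffic traces differ):

```python
def knn_classify(x, train, k=3):
    """k-nearest-neighbour majority vote on toy 2-D flow features,
    e.g. (mean packet size, flow duration) after normalisation."""
    by_dist = sorted(train,
                     key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)
    votes = [label for _, label in by_dist[:k]]
    return max(set(votes), key=votes.count)

train = [((0.1, 0.2), "interactive"), ((0.2, 0.1), "interactive"),
         ((5.0, 9.0), "bulk"), ((6.0, 8.0), "bulk"),
         ((0.3, 0.3), "interactive")]
print(knn_classify((0.15, 0.25), train))  # interactive
```

In the study, the labels come from the unsupervised step (SOM/K-means discovering the transactional, bulk and interactive classes), after which KNN reaches the reported ~2% error.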
Semantic Video Segmentation with Using Ensemble of Particular Classifiers and... – ITIIIndustries
A new approach based on the use of a deep neural network and an ensemble of particular classifiers is proposed. The approach relies on a novel fuzzy generalization block that combines classes of objects into semantic groups, each of which corresponds to one or more particular classifiers. As a result of processing, the sequence of frames is converted into an annotation of the event occurring in the video over a certain time interval.
SYNTHETICAL ENLARGEMENT OF MFCC BASED TRAINING SETS FOR EMOTION RECOGNITION – csandit
1) The document proposes a method to synthetically enlarge training sets for emotion recognition systems by modifying Mel Frequency Cepstral Coefficients (MFCCs).
2) It applies pitch shifting to MFCC features by scaling their frequency values, which allows new patterns to be generated without changing emotional content.
3) Experimental results on a speech emotion database show the proposed MFCC modification approach reduces test error rates compared to using the original training set, demonstrating it effectively increases generalization of the emotion recognition system.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
On the Joint Optimization of Performance and Power Consumption in Data Centers – Cemal Ardil
The document summarizes research on jointly optimizing performance and power consumption in data centers. It models the process of mapping tasks in a data center onto machines as a multi-objective problem to minimize both energy consumption and response time (makespan), subject to deadline and architectural constraints. It proposes using a simple goal programming technique that guarantees Pareto optimal solutions with good convergence. Simulation results show the technique achieves superior performance compared to other approaches and is competitive with optimal solutions for small-scale problems.
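The goal-programming idea, minimizing deviation from per-objective targets, can be sketched by brute force on a toy instance; the cost model below is invented, and the paper's technique scales far beyond exhaustive search:

```python
from itertools import product

# Hypothetical per-machine cost model: (time, energy) to run one task.
machines = [(2.0, 5.0), (4.0, 2.0)]   # machine 0 fast but hungry, 1 slow but frugal
tasks = 3

def evaluate(assign):
    """Makespan and total energy of one task-to-machine mapping."""
    loads = [0.0] * len(machines)
    energy = 0.0
    for m in assign:
        loads[m] += machines[m][0]
        energy += machines[m][1]
    return max(loads), energy

def goal_program(goal_time, goal_energy):
    """Toy goal programming: minimise the summed overshoot of both goals
    over all mappings (brute force over 2^3 assignments here)."""
    best, best_dev = None, float("inf")
    for assign in product(range(len(machines)), repeat=tasks):
        t, e = evaluate(assign)
        dev = max(t - goal_time, 0) + max(e - goal_energy, 0)
        if dev < best_dev:
            best, best_dev = assign, dev
    return best, best_dev

best, dev = goal_program(goal_time=4.0, goal_energy=10.0)
print(best, dev)  # (0, 0, 1) 2.0 – mixes machines to balance both goals
```

Minimizing goal deviations rather than a single weighted sum is what lets the technique trade makespan against energy while staying on the Pareto front.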
Fixed-Point Code Synthesis for Neural Networks – gerogepatton
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous cars, etc. A problem is that these critical systems often have limited computing resources, so they frequently use fixed-point arithmetic for its many advantages (speed, compatibility with small-memory devices). In this article, a new technique is introduced to tune the formats (precisions) of already-trained neural networks using fixed-point arithmetic, which can be implemented with integer operations only. The new optimized neural network computes the output with fixed-point numbers without degrading the accuracy beyond a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we first perform a preliminary analysis of the floating-point neural network to determine the worst cases; we then generate a system of linear constraints among integer variables, which we solve by linear programming. The solution of this system gives the new fixed-point format of each neuron. The experimental results show the efficiency of our method, which can ensure that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
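The core of integer-only inference, a fixed-point format with a chosen number of fractional bits, can be sketched as follows; the format-tuning by linear programming that the article contributes is not shown:

```python
def to_fixed(x, frac_bits):
    """Quantise a float to a signed fixed-point integer with frac_bits
    fractional bits (round to nearest)."""
    return round(x * (1 << frac_bits))

def fixed_mul(a, b, frac_bits):
    """Multiply two fixed-point numbers, rescaling the wide product."""
    return (a * b) >> frac_bits

def neuron_fixed(weights, inputs, frac_bits=14):
    """Integer-only dot product for one neuron: only the final display
    conversion touches floats."""
    acc = 0
    for w, x in zip(weights, inputs):
        acc += fixed_mul(to_fixed(w, frac_bits), to_fixed(x, frac_bits),
                         frac_bits)
    return acc / (1 << frac_bits)   # back to float only for display

w, x = [0.5, -0.25, 1.0], [0.8, 0.4, 0.1]
exact = sum(a * b for a, b in zip(w, x))        # 0.4 in floating point
print(abs(neuron_fixed(w, x) - exact) < 1e-3)   # True – within format error
```

Choosing frac_bits per neuron is precisely the degree of freedom the article's linear-programming formulation optimizes under the user's error threshold.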
1. The document proposes a Modified Fuzzy C-Means (MFCM) algorithm to segment brain tumors in noisy MRI images.
2. The conventional Fuzzy C-Means algorithm is sensitive to noise, so the MFCM adds an adaptive filtering step during segmentation.
3. The MFCM incorporates neighboring pixel membership values to reduce each pixel's resistance to being clustered, improving segmentation in noisy images.
Highly Parallel Pipelined VLSI Implementation of Lifting Based 2D Discrete Wa... – idescitation
The lifting-scheme-based Discrete Wavelet Transform is a powerful tool for image processing applications. The lack of disk space during transmission and storage of images drives the demand for high-speed implementations of efficient compression techniques. This paper proposes a highly pipelined and distributed VLSI architecture for the lifting-based 2D DWT, with the lifting coefficients represented in fixed-point [2:14] format. Compared to conventional architectures [11], [13]-[16], the proposed highly pipelined architecture optimizes the design and significantly increases processing speed: it raises the operating frequency at the expense of more hardware area. Initially, a software model of the proposed design was developed using MATLAB®. Corresponding to this software model, an efficient, highly parallel pipelined architecture was designed and developed in Verilog HDL and implemented on a Virtex®-6 (XC6VHX380T) FPGA. The design was also synthesized on a TSMC 0.18 μm ASIC library using the Synopsys Design Compiler. The entire system is suitable for several real-time applications.
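The lifting scheme itself can be illustrated in software with a 1-D integer Haar lift, a predict step followed by an update step, exactly invertible; the paper's 2-D pipelined hardware datapath is of course far more involved:

```python
def haar_lift_forward(x):
    """One level of lifting-scheme Haar DWT on an even-length signal:
    predict (detail) then update (approximation), integer to integer."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d // 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order – exact reconstruction."""
    even = [a - d // 2 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

signal = [10, 12, 14, 13, 9, 8, 20, 25]
a, d = haar_lift_forward(signal)
print(haar_lift_inverse(a, d) == signal)  # True – lifting is invertible
```

Because each lifting step only adds a function of the other half of the samples, inversion is just subtracting it back in reverse order, which is also what makes the structure so amenable to pipelining in hardware.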
An Approach for Color Image Compression of BMP and TIFF Images Using DCT and DWT – IAEME Publication
This document summarizes a research paper that compares image compression using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT). It finds that DWT performs better than DCT in terms of Mean Square Error and Peak Signal to Noise Ratio. The paper analyzes compression of BMP and TIFF color images using DCT and DWT. It converts color images to grayscale, then compresses the grayscale images using DWT. DWT decomposes images into different frequency components and scales, allowing for better image compression compared to DCT.
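The PSNR metric used for the DCT/DWT comparison follows directly from MSE; a short sketch with illustrative pixel rows:

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB, computed from the mean squared
    error between original and reconstructed pixel values."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

orig = [52, 55, 61, 66, 70, 61, 64, 73]
recon = [53, 55, 60, 66, 69, 62, 64, 72]
print(round(psnr(orig, recon), 1))  # ≈ 50.2 dB
```

A higher PSNR (equivalently, lower MSE) at the same bit rate is what the paper reports in DWT's favor.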
The document discusses the chimera grid method for computational fluid dynamics simulations of complex geometries. It has two main elements: (1) decomposition of the computational domain into sub-domains that are each gridded independently, and (2) communication of solution data between sub-domains through interpolation. Overlapping grids allow each sub-domain to be gridded with structured grids while handling interfaces through hole and outer boundaries. The chimera grid method makes it possible to model problems with complex geometries using easier-to-generate body-fitted grids. It has been used successfully for simulations of configurations like the integrated space shuttle.
This document discusses techniques for representing digital circuit partitioning problems using graph representations. It presents three encoding techniques to map graph partitions to the problem domain: 1) a binary string where each bit represents a cell and its partition, 2) a string with two regions to represent vertices and edge crossings, and 3) a string with regions for vertices and edges. The techniques are evaluated in terms of suitability, with the second approach more suitable for dense circuits. Net cut evaluation is also described to analyze partitioning solutions.
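The first encoding and the net-cut evaluation can be sketched together; the cells, nets and partition string below are invented for illustration:

```python
def net_cut(partition, nets):
    """Count nets crossing the cut, given a binary-string encoding where
    character i is the partition of cell i (encoding technique 1 above).
    A net is cut when its cells span more than one partition."""
    return sum(1 for net in nets
               if len({partition[cell] for cell in net}) > 1)

# 4 cells, 3 nets (each net lists the cells it connects)
nets = [(0, 1), (1, 2), (2, 3)]
print(net_cut("0011", nets))  # 1 – only net (1, 2) crosses the cut
```

Minimizing this count over candidate partition strings is the objective the document's three encodings all feed into.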
Survey on Clustering Based Color Image Segmentation and Novel Approaches to f... – eSAT Journals
Abstract: Segmentation is an important image processing technique that helps to analyze an image automatically. Applications involving detection or recognition of objects in images often include a segmentation process. This paper describes two unsupervised clustering-based color image segmentation techniques, namely K-means clustering and Fuzzy C-means (FCM) clustering, and presents the advantages and disadvantages of both algorithms. The K-means algorithm takes less computation time than the Fuzzy C-means algorithm, which produces results close to those of K-means. On the other hand, in the FCM algorithm each pixel of an image can have membership in more than one cluster, which is not the case for K-means, an advantage of the FCM method. Color images contain a wide variety of information and are more complicated than grayscale images; although color image segmentation is a challenging task, it provides a path for image analysis in practical application fields. Secondly, some novel approaches to the FCM algorithm for better image segmentation are also discussed, namely SFCM (Spatial FCM) and THFCM (Thresholding FCM). The basic FCM algorithm does not take into consideration the spatial information of the image. SFCM focuses on spatial details: it introduces a spatial function into the FCM membership function and then operates with the available spatial information. THFCM is another approach that focuses on a thresholding technique for image segmentation; its main task is to find a discerner cluster that acts as an automatic threshold. These two approaches show how better segmentation results can be obtained.
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class... – Waqas Tariq
Selection of inputs is one of the most substantial components of classification algorithms for data mining and pattern recognition problems, since even the best classifier will perform badly if the inputs are not selected well. Big data and computational complexity are the main causes of poor performance and low accuracy in classical classifiers; in other words, the complexity of a classifier method is inversely proportional to its classification efficiency. For this purpose, two hybrid classifiers have been developed using both type-1 and type-2 fuzzy c-means clustering cascaded with a classifier. In these proposed classifiers, a large number of data points are reduced by fuzzy c-means clustering before being applied as inputs to a classifier algorithm. The aim of this study is to investigate the effect of fuzzy clustering on well-known and useful classifiers such as artificial neural networks (ANN) and support vector machines (SVM). The positive effects of the proposed algorithms were then investigated on different data sets.
Machine Learning Algorithms for Image Classification of Hand Digits and Face ...IRJET Journal
This document discusses machine learning algorithms for image classification using five different classification schemes. It summarizes the mathematical models behind each classification algorithm, including Nearest Class Centroid classifier, Nearest Sub-Class Centroid classifier, k-Nearest Neighbor classifier, Perceptron trained using Backpropagation, and Perceptron trained using Mean Squared Error. It also describes two datasets used in the experiments - the MNIST dataset of handwritten digits and the ORL face recognition dataset. The performance of the five classification schemes are compared on these datasets.
Chaotic Block Image Scheme using Large Key Space and Message Digest AlgorithmCSCJournals
This paper presents a chaotic block image scheme using a large key space and a message digest algorithm. A 2D Cat map is used for confusion and a 2D Sine-Tent Composite map (2D-STCM) key generator for diffusion. Confusion is implemented by the 2D Cat map with arbitrary block size: in the first pass, the Cat map performs local shuffling of indexes inside blocks, while in the second pass it performs global shuffling of the whole image's indexes. The designed algorithm executes two confusions and one diffusion in each iteration. To increase the security level, the message digest algorithm is used as a fingerprint of the plain image that creates the initial value of the key; 2D-STCM then generates a large key stream. Diffusion is implemented by an XOR operation between the key stream and the confused image. Experimental results show that the security level increases due to the integration of confusion and diffusion, while the large key space and the high sensitivity of the secret keys guarantee the security performance. Performance measures reach top values among similar studies. To verify the obtained results, the authors also implemented the inverse chaos. All tests were processed in MATLAB 2015a.
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPSIJNSA Journal
This document proposes an image encryption algorithm that uses diffusion and multiple chaotic maps. It begins by generating subkeys using chaotic logistic maps. The image is then encrypted using one subkey via logistic map transformation, diffusing the image. Additional subkeys are generated from four chaotic maps by hopping through various map orbits. The image is treated as a 1D array via raster and zigzag scanning, divided into blocks, and those blocks undergo position permutation and value transformation controlled by the chaotic subkeys, fully encrypting the image. Decryption reverses the process using the same subkeys.
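The diffusion step described above, XORing the data with a keystream derived from a chaotic logistic map, can be illustrated with a minimal sketch (toy parameters and quantization, not the paper's actual scheme or key schedule):

```python
def logistic_keystream(x0, r, n):
    """Generate n keystream bytes by iterating the logistic map x -> r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)     # quantize the chaotic state to a byte
    return bytes(out)

def xor_cipher(data, x0=0.3141, r=3.9999):
    """Diffuse bytes by XOR with a chaotic keystream; XOR is its own inverse."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = b"chaotic block image"
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain         # same keystream decrypts
```

In a real scheme the initial value x0 would come from the message-digest fingerprint of the plain image rather than a constant, so each image gets its own keystream.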
Automatic Determination Number of Cluster for NMKFC-Means Algorithms on Image...IOSR Journals
This document discusses image segmentation techniques using clustering algorithms. It introduces Fuzzy C-Means (FCM) clustering, which allows data points to belong to multiple clusters with varying degrees of membership. However, FCM does not work well on noisy or non-linearly separable data. The document proposes the Kernel Fuzzy C-Means (KFCM) algorithm, which uses a kernel function to map data to a higher dimensional space, making separation easier. While improving results for noisy images, KFCM does not consider neighboring pixels. Finally, the document introduces the Novel Modified Kernel Fuzzy C-Means (NMKFCM) algorithm, which incorporates neighborhood information into the objective function to further improve segmentation accuracy, especially for noisy images.
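The kernel trick mentioned above can be made concrete: with a Gaussian (RBF) kernel, K(x, x) = 1, so the feature-space distance KFCM uses collapses to a simple closed form (a minimal sketch, not the NMKFCM implementation):

```python
import math

def gaussian_kernel(x, v, sigma=1.0):
    """RBF kernel K(x, v) = exp(-||x - v||^2 / sigma^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, v))
    return math.exp(-sq / sigma ** 2)

def kernel_distance_sq(x, v, sigma=1.0):
    """Feature-space distance ||phi(x) - phi(v)||^2 = K(x,x) - 2K(x,v) + K(v,v),
    which reduces to 2 * (1 - K(x, v)) because K(x, x) = 1 for the RBF kernel."""
    return 2.0 * (1.0 - gaussian_kernel(x, v, sigma))

# identical points are at distance 0; very distant points approach the cap of 2,
# which is what limits the influence of outliers and noise on the clustering
assert kernel_distance_sq((0.0, 0.0), (0.0, 0.0)) == 0.0
```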
IRJET- Chatbot Using Gated End-to-End Memory NetworksIRJET Journal
The document describes a proposed chatbot system that uses a gated end-to-end memory network model for hospital appointment booking. The model is trained on dialog data consisting of user utterances and bot responses related to booking appointments. It uses an attention mechanism over the dialog memory to select relevant parts of the conversation. The model is trained end-to-end to dynamically regulate interactions with the memory. Experiments show it can handle new combinations of fields when booking appointments in a simulated hospital reservation scenario.
This document discusses classifying network traffic flows into quality of service (QoS) classes using unsupervised machine learning and K-nearest neighbor clustering. It first reviews previous work in traffic classification. It then uses self-organizing maps and K-means clustering as unsupervised methods to identify three inherent traffic classes - transactional, bulk data transfer, and interactive applications. The K-nearest neighbor classifier is then evaluated and found to have a low error rate of around 2% for test data, significantly better than a minimum mean distance classifier with 7% error.
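The K-nearest-neighbor classification step above amounts to a majority vote over the K closest training flows; a minimal sketch with hypothetical toy flow features (mean packet size, duration), not the paper's traffic data:

```python
from collections import Counter

def knn_classify(train, labels, x, k=3):
    """Label x by majority vote among its k nearest training points."""
    order = sorted(
        range(len(train)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)),
    )
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# toy flows: (mean packet size, duration) -> traffic class
train = [(40, 0.1), (45, 0.2), (1400, 30), (1450, 25)]
labels = ["transactional", "transactional", "bulk", "bulk"]
print(knn_classify(train, labels, (50, 0.15), k=3))   # -> transactional
```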
Semantic Video Segmentation with Using Ensemble of Particular Classifiers and...ITIIIndustries
A new approach based on the use of a deep neural network and an ensemble of particular classifiers is proposed. The approach relies on a novel fuzzy-generalization block that combines classes of objects into semantic groups, each of which corresponds to one or more particular classifiers. As a result of processing, the sequence of frames is converted into an annotation of the event occurring in the video over a certain time interval.
SYNTHETICAL ENLARGEMENT OF MFCC BASED TRAINING SETS FOR EMOTION RECOGNITIONcsandit
1) The document proposes a method to synthetically enlarge training sets for emotion recognition systems by modifying Mel Frequency Cepstral Coefficients (MFCCs).
2) It applies pitch shifting to MFCC features by scaling their frequency values, which allows new patterns to be generated without changing emotional content.
3) Experimental results on a speech emotion database show the proposed MFCC modification approach reduces test error rates compared to using the original training set, demonstrating it effectively increases generalization of the emotion recognition system.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
On the-joint-optimization-of-performance-and-power-consumption-in-data-centersCemal Ardil
The document summarizes research on jointly optimizing performance and power consumption in data centers. It models the process of mapping tasks in a data center onto machines as a multi-objective problem to minimize both energy consumption and response time (makespan), subject to deadline and architectural constraints. It proposes using a simple goal programming technique that guarantees Pareto optimal solutions with good convergence. Simulation results show the technique achieves superior performance compared to other approaches and is competitive with optimal solutions for small-scale problems.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Fixed-Point Code Synthesis for Neural Networksgerogepatton
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous cars, etc. A problem is that these critical systems often have limited computing resources; they often use fixed-point arithmetic for its many advantages (speed, compatibility with small-memory devices). In this article, a new technique is introduced to tune the formats (precision) of already trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only. The new optimized neural network computes the output with fixed-point numbers without degrading the accuracy beyond a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, a preliminary analysis of the floating-point neural network determines the worst cases, then a system of linear constraints among integer variables is generated and solved by linear programming. The solution of this system gives the new fixed-point format of each neuron. The experimental results obtained show the efficiency of the method, which can ensure that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
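The core fixed-point idea, representing weights as integers in a Q-format and computing with integer arithmetic only, can be sketched as follows (a toy single-neuron illustration under a hypothetical 14-fractional-bit format, not the paper's synthesis procedure):

```python
def to_fixed(x, frac_bits=14):
    """Quantize a float to a fixed-point integer with frac_bits fractional bits."""
    return round(x * (1 << frac_bits))

def fixed_dot(w_fx, x_fx, frac_bits=14):
    """Integer-only dot product. The product of two Q-format values carries
    2*frac_bits fractional bits, so rescale once at the end for comparison."""
    acc = sum(wi * xi for wi, xi in zip(w_fx, x_fx))   # pure integer arithmetic
    return acc / (1 << (2 * frac_bits))

w = [0.5, -0.25, 0.125]
x = [1.0, 2.0, 4.0]
w_fx = [to_fixed(v) for v in w]
x_fx = [to_fixed(v) for v in x]
float_out = sum(wi * xi for wi, xi in zip(w, x))   # 0.5 - 0.5 + 0.5 = 0.5
assert abs(fixed_dot(w_fx, x_fx) - float_out) < 1e-3
```

The analysis described in the abstract would choose a per-neuron frac_bits so that this quantization error stays below the user's threshold for all inputs in [xmin, xmax].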
1. The document proposes a Modified Fuzzy C-Means (MFCM) algorithm to segment brain tumors in noisy MRI images.
2. The conventional Fuzzy C-Means algorithm is sensitive to noise, so the MFCM adds an adaptive filtering step during segmentation.
3. The MFCM incorporates neighboring pixel membership values to reduce each pixel's resistance to being clustered, improving segmentation in noisy images.
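The neighborhood idea in step 3 can be illustrated by blending each pixel's membership map with the mean of its 4-neighbors, so an isolated noisy pixel is pulled toward its surrounding region (a minimal sketch of the principle, not the paper's exact MFCM update):

```python
import numpy as np

def smooth_memberships(u, w=0.6):
    """Blend each pixel's membership with the mean of its 4-neighbours.
    u has shape (H, W, C): one membership image per cluster; w is the
    (assumed) weight given to neighbourhood evidence."""
    padded = np.pad(u, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    blended = (1 - w) * u + w * neigh
    return blended / blended.sum(axis=2, keepdims=True)   # re-normalize rows

# a single noisy pixel inside a homogeneous region loses its outlier vote
u = np.zeros((3, 3, 2)); u[..., 0] = 1.0   # every pixel belongs to cluster 0
u[1, 1] = [0.0, 1.0]                       # except one noisy pixel
s = smooth_memberships(u)
```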
Highly Parallel Pipelined VLSI Implementation of Lifting Based 2D Discrete Wa...idescitation
The lifting-scheme-based Discrete Wavelet Transform is a powerful tool for image processing applications. The lack of disk space during transmission and storage of images pushes the demand for high-speed implementations of efficient compression techniques. This paper proposes a highly pipelined and distributed VLSI architecture of lifting-based 2D DWT with lifting coefficients represented in fixed-point [2:14] format. Compared to conventional architectures [11], [13]-[16], the proposed highly pipelined architecture optimizes the design, significantly increasing performance speed. The design raises the operating frequency at the expense of more hardware area. Initially, a software model of the proposed design was developed using MATLAB®. Corresponding to this software model, an efficient, highly parallel pipelined architecture was designed and developed in the Verilog HDL language and implemented on a VIRTEX® 6 (XC6VHX380T) FPGA. The design was also synthesized on a TSMC 0.18 µm ASIC library using the Synopsys Design Compiler. The entire system is suitable for several real-time applications.
An approach for color image compression of bmp and tiff images using dct and dwtIAEME Publication
This document summarizes a research paper that compares image compression using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT). It finds that DWT performs better than DCT in terms of Mean Square Error and Peak Signal to Noise Ratio. The paper analyzes compression of BMP and TIFF color images using DCT and DWT. It converts color images to grayscale, then compresses the grayscale images using DWT. DWT decomposes images into different frequency components and scales, allowing for better image compression compared to DCT.
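The two comparison metrics named above, Mean Square Error and Peak Signal to Noise Ratio, are straightforward to compute (the pixel values below are hypothetical, for illustration only):

```python
import math

def mse(orig, rec):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(orig, rec)) / len(orig)

def psnr(orig, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means better reconstruction."""
    e = mse(orig, rec)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)

orig = [52, 55, 61, 66, 70, 61, 64, 73]
rec  = [53, 55, 60, 67, 70, 62, 63, 74]   # hypothetical decompressed values
# small per-pixel errors give a high PSNR; a perfect reconstruction is infinite
```

A transform that concentrates energy better (as the paper argues DWT does relative to DCT) discards less useful information at the same compression ratio, which shows up directly as lower MSE and higher PSNR.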
The document discusses the chimera grid method for computational fluid dynamics simulations of complex geometries. It has two main elements: (1) decomposition of the computational domain into sub-domains that are each gridded independently, and (2) communication of solution data between sub-domains through interpolation. Overlapping grids allow each sub-domain to be gridded with structured grids while handling interfaces through hole and outer boundaries. The chimera grid method makes it possible to model problems with complex geometries using easier-to-generate body-fitted grids. It has been used successfully for simulations of configurations like the integrated space shuttle.
This document discusses techniques for representing digital circuit partitioning problems using graph representations. It presents three encoding techniques to map graph partitions to the problem domain: 1) a binary string where each bit represents a cell and its partition, 2) a string with two regions to represent vertices and edge crossings, and 3) a string with regions for vertices and edges. The techniques are evaluated in terms of suitability, with the second approach more suitable for dense circuits. Net cut evaluation is also described to analyze partitioning solutions.
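The net-cut evaluation mentioned above, counting edges that cross the partition boundary under the binary-string encoding, can be sketched as:

```python
def net_cut(partition_bits, edges):
    """Count edges whose endpoints land in different partitions.
    partition_bits[i] is the partition (0 or 1) of cell i, mirroring the
    binary-string encoding; edges is a list of (u, v) cell index pairs."""
    return sum(1 for u, v in edges if partition_bits[u] != partition_bits[v])

# 4 cells encoded as "0011": only edge (1, 2) crosses the boundary
bits = [0, 0, 1, 1]
edges = [(0, 1), (1, 2), (2, 3)]
assert net_cut(bits, edges) == 1
```

A partitioner would search over bit strings to minimize this count, subject to balance constraints between the two regions.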
Survey on clustering based color image segmentation and novel approaches to f...eSAT Journals
This thesis investigates the Um air interface of GSM technology by designing and implementing a software-defined radio (SDR)-driven protocol analyzer. Samples from the radio link between a base transceiver station and mobile station are processed using a HackRF One SDR and transferred to a computer over USB. By demodulating the radio signal and estimating the digital information, network traffic can be monitored and the underlying protocols examined. The software implementation proposes an educational use for learning mobile communication protocols.
07 18sep 7983 10108-1-ed an edge edit ariIAESIJEECS
Edge detection is an important and classical problem in the medical field and in computer vision. The Caliber Fuzzy C-means (CFCM) clustering algorithm for edge detection depends on the selection of the initial cluster center value. It attempts to organize a collection of pixels into clusters such that a pixel within a cluster is more similar to every other pixel in that cluster. Using the CFCM technique, the BSDS image is first clustered, and the clustered image is then given as input to the basic Canny edge detection algorithm. The application of new parameters with fewer operations makes CFCM fruitful. According to the evaluation, the CFCM clustering function commonly divides the image into four clusters. The proposed method is evidently robust to this modification of the fuzzy c-means and Canny algorithms, and its convergence is very fast compared to other edge detection algorithms. The proposed algorithm yields enhanced edge detection and better results than other traditional image edge detection techniques.
The document provides an acknowledgment and abstract for a thesis on modeling the AODV routing protocol for mobile ad-hoc networks using colored Petri nets. The author thanks their supervisor and others for their guidance and support during the project. The abstract indicates that AODV will be modeled using colored Petri nets to evaluate performance measures like workload and packet transmission efficiency, and the results will be compared to simulations run in the NS2 network simulator.
Comparison Between Clustering Algorithms for Microarray Data AnalysisIOSR Journals
Currently, two techniques are used for large-scale gene-expression profiling: microarray and RNA sequencing (RNA-Seq). This paper studies and compares different clustering algorithms used in microarray data analysis. A microarray is an array of DNA molecules which allows multiple hybridization experiments to be carried out simultaneously and traces the expression levels of thousands of genes. It is a high-throughput technology for gene expression analysis and has become an effective tool for biomedical research. Microarray analysis aims to interpret the data produced from experiments on DNA, RNA, and protein microarrays, which enable researchers to investigate the expression state of a large number of genes. Data clustering represents the first and main process in microarray data analysis. The k-means, fuzzy c-means, self-organizing map, and hierarchical clustering algorithms are under investigation in this paper, and are compared based on their clustering models.
IRJET- An Effective Brain Tumor Segmentation using K-means ClusteringIRJET Journal
This document presents a study on using k-means clustering for brain tumor segmentation from MRI images. It begins with an introduction to brain strokes and current segmentation techniques. It then describes the fuzzy c-means clustering algorithm and its limitations. The proposed method is to use k-means clustering for tumor segmentation, with preprocessing of MRI images followed by k-means clustering. Experimental results on brain MRI images show that k-means clustering can effectively segment tumors, with clearer edges compared to traditional algorithms like fuzzy c-means.
Segmentation and Classification of MRI Brain TumorIRJET Journal
This document presents a study comparing two techniques for detecting brain tumors in MRI images: level set segmentation and K-means segmentation. Features are extracted from the segmented tumors using discrete wavelet transform and gray level co-occurrence matrix. The features are then classified as benign or malignant using a support vector machine. The level set method and K-means method are evaluated based on accuracy, sensitivity, and specificity on a dataset of 41 MRI brain images. The level set method achieved slightly higher accuracy of 94.12% compared to the K-means method.
New Approach of Preprocessing For Numeral RecognitionIJERA Editor
The present paper proposes a new approach to preprocessing for handwritten, printed and isolated numeral characters. The new approach reduces the size of the input image of each numeral by discarding redundant information. The method also reduces the number of features in the attribute vector provided by the feature extraction method. Numeral recognition is carried out in this work using k-nearest neighbors and multilayer perceptron techniques. The simulations achieved a good recognition rate in less running time.
Given that brain tumors and gliomas are notorious forms of cancer, the medical field has developed several methods to diagnose these diseases, with many algorithms that can segment out the cancer cells in magnetic resonance imaging (MRI) scans of the brain. This paper proposes a similar segmenting algorithm called a custom administering attention module. The solution uses a custom U-Net model along with a custom administering attention module that uses an attention mechanism to classify and segment the glioma cells using long-range dependency of the feature maps. The customizations lead to a reduction in code complexity and memory cost. The final model has been tested on the BraTS 2019 dataset and compared with other state-of-the-art methods, showing how much better the proposed model performs in the categories of enhancing, non-enhancing and peritumoral gliomas.
This PhD thesis proposes methods for distributed iterative decoding and processing in wireless cooperative networks. It presents a block structure layered design for jointly designing channel encoders across all nodes in a wireless network. It also develops a generic implementation framework for sum-product algorithm message passing on factor graphs, using a proposed Karhunen-Loeve transform message representation. This framework can be applied to receiver processing in wireless networks. The thesis aims to advance both the global design of channel codes across network nodes, and the receiver processing of individual nodes, to better support wireless cooperative networks.
ENERGY PERFORMANCE OF A COMBINED HORIZONTAL AND VERTICAL COMPRESSION APPROACH...IJCNCJournal
Energy efficiency is an essential issue to be reckoned with in wireless sensor network development. Since low-powered sensor nodes deplete their energy in transmitting the collected information, several strategies have been proposed to investigate communication power consumption, in order to reduce the amount of transmitted data without affecting information reliability. Lossy compression is a promising solution recently adapted to overcome the challenge of energy consumption by exploiting data correlation and discarding redundant information. In this paper, we propose a hybrid compression approach based on two dimensions, specified as horizontal compression (HC) and vertical compression (VC), typically implemented in a cluster-based routing architecture. The proposed scheme considers two key performance metrics, energy expenditure and data accuracy, to decide the adequate compression approach, based on an HC-VC or VC-HC configuration, according to each WSN application's requirements. Simulation results exhibit the performance of both proposed approaches in terms of extending the clustering network lifetime.
Extended Fuzzy C-Means with Random Sampling Techniques for Clustering Large DataAM Publications
Big data are any data that you cannot load into your computer's primary memory. Clustering is a primary task in pattern recognition and data mining, and we need algorithms that scale well with the data size. The former implementation, literal Fuzzy C-Means, is linear or serialized. The FCM algorithm attempts to partition a finite collection of n elements into a collection of c fuzzy clusters; given a finite set of data, the algorithm returns a list of c cluster centers. However, it does not scale well, slowing down as the size of the data increases, and is thus impractical and sometimes undesirable. In this paper, we propose an extended version of the fuzzy c-means clustering algorithm that uses various random sampling techniques, to study which method scales well for large or very large data.
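The sampling idea above, fitting cluster centers on a random sample and then assigning fuzzy memberships to the full data from those fitted centers, can be sketched on toy 1-D data (a simple two-center refinement stands in for running full FCM on the sample; all names and data here are illustrative):

```python
import random

def fit_centers_on_sample(data, sample_size, iters=10, seed=0):
    """Estimate two 1-D cluster centers from a random sample of the data
    (a stand-in for running the full FCM iteration on the sample only)."""
    rng = random.Random(seed)
    sample = rng.sample(data, sample_size)
    lo, hi = min(sample), max(sample)
    for _ in range(iters):                 # Lloyd-style refinement on the sample
        a = [x for x in sample if abs(x - lo) <= abs(x - hi)]
        b = [x for x in sample if abs(x - lo) > abs(x - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return [lo, hi]

def memberships(x, centers, m=2.0):
    """Fuzzy membership of one point in each center (standard FCM formula)."""
    d = [max(abs(x - v), 1e-12) for v in centers]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[j] / d[k]) ** p for k in range(len(d)))
            for j in range(len(d))]

rng = random.Random(1)
data = ([rng.gauss(0.0, 0.1) for _ in range(50)] +
        [rng.gauss(5.0, 0.1) for _ in range(50)])
centers = fit_centers_on_sample(data, sample_size=20)   # fit on 20% of the data
u = memberships(0.05, centers)   # full-data points reuse the sampled centers
```

The point of the extension step is that only the center fitting touches the sample; membership assignment for the remaining points is a cheap per-point computation against the fixed centers.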
ANALOG MODELING OF RECURSIVE ESTIMATOR DESIGN WITH FILTER DESIGN MODELVLSICS Design
This document summarizes a research paper on implementing a low power design methodology for recursive encoders and decoders. It discusses how recursive coding can achieve better error correction performance at low signal-to-noise ratios compared to other codes. It then describes the design of a recursive decoder that uses the log-MAP algorithm to minimize power consumption. The decoder uses five main computational steps - branch metric calculation, forward metric computation, backward metric computation, log-likelihood ratio calculation, and extrinsic information calculation. It also compares the implementation of four-state and eight-state recursive encoders. The goal of the design is to optimize the power and area of recursive encoders and decoders.
This document proposes a new method to remove the dependence of fuzzy c-means clustering on random initialization. The conventional fuzzy c-means algorithm's performance is highly dependent on the randomly initialized membership values used to select initial centroids. The proposed method uses an algorithm by Yuan et al. to determine initial centroids without randomization. These centroids are then used as inputs to the conventional fuzzy c-means algorithm. The performance of the proposed method is compared to conventional fuzzy c-means using partition coefficient and clustering entropy validity indices. Results show the proposed method produces more consistent and better performance by removing the effect of random initialization.
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATIONVLSICS Design
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques in medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods includes sources from image processing journals, conferences, books, dissertations and theses. The conceptual details of the methods are explained and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided. The state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods defined are not always mutually independent; hence, their interrelationships are also stated. Finally, conclusions are drawn summarizing commonly used techniques and their complexities in application.
This document summarizes several cluster head selection techniques for mobile ad-hoc networks (MANETs). It begins with an introduction to MANETs and clustering in MANETs. It then surveys different cluster head selection techniques: Lowest ID, Highest Degree, K-hop Connectivity ID, Mobility Based D-hop, Adaptive Cluster Load Balance, Least Cluster Change, Load Balancing, Power-aware Dominant Set, Weighted Approach, Max-Min D-cluster Formation, and Mobility Based Cluster Formation. It provides a brief description of each technique and analyzes their merits and demerits. Finally, it concludes that the different techniques select the cluster head based on various parameters such as node ID, connectivity degree, mobility, load, and power.
A Survey Paper on Cluster Head Selection Techniques for Mobile Ad-Hoc NetworkIOSR Journals
This document summarizes several cluster head selection techniques for mobile ad-hoc networks (MANETs). It discusses techniques that select the cluster head based on attributes like node ID, degree of connectivity, mobility, load balancing, and power consumption. Some techniques aim to improve stability and reduce overhead by minimizing cluster changes. Each technique has advantages like simplicity or load balancing, and disadvantages like additional messaging or inability to eliminate ties between nodes. The survey provides a comparison of the techniques on their selection criteria and merits and demerits.
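The Lowest-ID heuristic surveyed above can be sketched: a node declares itself cluster head when no one-hop neighbor has a smaller ID (a one-round simplification; the full protocol then repeats among nodes left uncovered):

```python
def lowest_id_heads(adjacency):
    """Return the nodes whose ID is smaller than every one-hop neighbour's ID.
    adjacency maps each node ID to a list of its neighbours' IDs."""
    return {node for node, neighbours in adjacency.items()
            if all(node < n for n in neighbours)}

# only node 1 has no lower-ID neighbour, so it alone becomes a head this round;
# nodes 4 and 5 would be resolved in a later round of the full protocol
adjacency = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
assert lowest_id_heads(adjacency) == {1}
```

This illustrates the survey's point about the technique's simplicity, and also its demerit: selection ignores mobility, load, and remaining power entirely.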
Similar to CSC 347 – Computer Hardware and Maintenance (20)
A portfolio resume is a type of creative resume that showcases examples of your work along with the usual resume information about your work experience. ... Resume portfolios can also work well for some other industries, like teaching, in which showing professional creativity and examples of your work is a bonus.
Computational chemistry allows a computer to understand a specific aspect of science — such as the structure of a protein — and then learn how it functions. Applications of coupling computers and chemistry include creating solar cells and drugs and optimizing motor vehicles.
Online resort reservation system report (practicum)Sumaiya Ismail
An online reservation system is a software you can use for managing reservations. They allow hotels, tours, and activity operators to accept bookings online and better manage their phone and in-person bookings. They also do so much more than that.
Comparative study of microprocessor perspective of historical preferenceSumaiya Ismail
Thesis on Microprocessor.
A microprocessor is an electronic component that is used by a computer to do its work. It is a central processing unit on a single integrated circuit chip containing millions of very small components including transistors, resistors, and diodes that work together.
A job description or JD is a written narrative that describes the general tasks, or other related duties, and responsibilities of a position. ... The analysis considers the areas of knowledge, skills and abilities needed to perform the job.
Stuxnet is a malicious computer worm, first uncovered in 2010, thought to have been in development since at least 2005. Stuxnet targets supervisory control and data acquisition (SCADA) systems and is believed to be responsible for causing substantial damage to the nuclear program of Iran.
Sumaiya Ismail is applying for an ICT teacher position at Insight International School in Dhaka. She has over two years of teaching experience as a home tutor and feels confident she could be a great addition to the school. She saw the job posting online and believes she meets all of the requirements. She is prepared to interview for the role and hopes to gain experience teaching ICT subjects in a school setting.
Good Cybercitizens Make the Internet a Safer Place
Own your online presence. To keep yourself safe, set privacy and security settings on web services, apps, and devices to your comfort level. ...
Be a good digital citizen. ...
Respect yourself and others. ...
Practice good communications. ...
Protect yourself and your information.
Biometrics is the most secure and suitable authentication tool. It is the automated method of recognizing a person based on a physiological or behavioral characteristic. Biometric authentication is used in computer Science for verifying human identity.
Comparison and contrast on studying at north south university campus and stud...Sumaiya Ismail
This document is a comparison and contrast essay on studying at North South University campus versus studying at home. It begins with an introduction stating that due to the pandemic, classes have moved online and the student must now study at home.
The comparison section notes three similarities: both environments encourage character development through interaction with others, emphasize discipline and punctuality, and promote self-learning through assignments.
The contrast section identifies three differences: studying at home allows for a quieter environment without distractions, group study is not possible at home, and home lacks the university's lab facilities.
The conclusion acknowledges different perspectives but argues that both environments are needed for a complete education, as university ensures proper higher education while home provides a
Food ordering system for red bangladesh course system ananlysisSumaiya Ismail
This document describes a proposed food ordering system for Red Bangladesh. The system aims to make the food ordering and payment process online to address issues with the existing manual paper-based system. Key features of the proposed system include login accounts for accurate records, minimized data entry time, menu variations, and order tracking. Entity relationship and data flow diagrams are provided to explain the system design. Effort distribution, timelines, and cost estimates are also included. Screenshots demonstrate features like user interfaces for ordering and admin functions. The conclusion states the system will improve the food distribution process.
Landslide monitoring using wireless sensor networkSumaiya Ismail
This document summarizes a proposed landslide monitoring system using a wireless sensor network. It discusses how sensor nodes would collect data on factors like slope movement and moisture levels, forward it to a gateway, and ultimately to a field management center. An example is given of how such a system could monitor railway lines in mountainous areas. The current disadvantages of high cost, limited range, and slow data transmission are identified. An improved design is proposed using Zigbee modules for their low cost, high data rates, and ability to communicate over a broader range. The conclusion states the goal is to optimize the system to reduce costs and energy use while extending the lifetime of the wireless sensor network.
The document provides a summary of a project report for a Food Ordering System for RED Bangladesh. It includes an introduction describing the purpose of the system, to make food ordering, payment, and services online. It then summarizes the existing manual paper-based system and problems with it. The proposed system aims to address these issues by providing an online system with advantages like accurate records, minimized time and effort. The report also discusses system feasibility studies and the logical and physical design of the proposed online food ordering system.
The document discusses the history and uses of the internet. It begins with defining the internet as a network of computers around the world. It then explains the development of the World Wide Web and browsers that allow users to access and navigate web pages. Popular search engines are also discussed that help users locate information on the vast internet. A variety of uses of the internet are presented along with the conclusion that while the internet provides many advantages, it also has disadvantages and is an important part of modern life.
Strategies of improving Communication between University & Students Sumaiya Ismail
This document appears to list the names and ID numbers of 35 students along with some brief descriptions of student-student, student-teacher, and teacher-teacher communications. It also includes two website URLs for communication and content definitions.
The document appears to be related to a spelling bee competition at IISC, as it contains the phrase "WELCOME TO SPELLING BUZZ OF IISC" repeated multiple times. Each section provides a scrambled word or phrase for students to unscramble. There are 12 scrambled words/phrases included in total for students to solve as part of the spelling bee competition.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Programming Foundation Models with DSPy - Meetup Slides
CSC 347 – Computer Hardware and Maintenance
International University of Business Agriculture and Technology
(IUBAT University)
Report
on
“Secondary Memory”
This report is prepared for partial fulfillment of
CSC 347 – Computer Hardware and Maintenance
Prepared For:
Abhijit Saha, Ph.D
Course Instructor
Department of Computer Science and Engineering
IUBAT University
Prepared By:
WGM Group – C5
Spring 2017
Table of Contents
1. Introduction……………………………………………………………………………………1
2. Lskfjlksadf……………………………………………………………………………………..3
2.1 Jksfhgjs………………………………………………………………………………………4
2.2 Djkfhjkhf……………………………………………………………………………………..7
3. sdhfhsdjh………………………………………………………………………………………7
Title Fly…………………………………………………………………………………………………..I-VII
Chapter 1: Introduction………………………………………………………………………...……..……1
Chapter 2: Secondary memory………………………………………………………………………..……1
2.0. Secondary memory……………………………………………………………………………..…..1
2.1. Types of Secondary memory………………………………………………………………………....1
2.2. Some latest secondary memory………………………………………………………………………2
2.3. Modern examples by shape…………………………………………………………………..……….3
Chapter 3: Floppy disk……………………………………………………………………...……..…….4
3.0. Floppy disk…………………………………………………………………………………..………..4
3.1. How a Floppy Disk Works ………………………………………………………..………….………4
3.2. Advantages…………………………………………………………………………………..…..……4
3.3. Disadvantages……………………………………………………………………………………...….4
Chapter 4: Hard Disk Drive………………………………………………………………………..…....5
4.0. Hard Disk Drive ………………………………………………………………………..…………….5
4.1. External and Internal hard drives…………………………………………………………….………..5
4.2. Types of Computer Hard Drive……………………………………….................................................5
Chapter 5: Optical Memory…………………………………………………………..……...…….……10
5.0. Optical Memory…………………………………………………………………………….………..10
5.1. CD ………………………………………………………………………………………………........10
5.2. DVD……………………………………………………………………………………………..…..11
Chapter 6: External devices of the Secondary Memory………………………………………….……15
6.0. External devices………………………………………………………………………………….…...15
6.4. Jazz disk……………………………………………………………………………………………….19
6.4.1. Some concerns when we use Jazz disk……………………………………………………..….19
Chapter 7: Flash Memory: ………………………………………………………………..…………….20
7.0. Flash Memory…………………………………………………………………………….…………..20
7.1. Uses Flash technology…………………………………………………………………….………….20
7.2. Types of Flash Memory………………………………………………………………………………21
7.2.1. NOR flash……………………………………………………………………………...……..21
7.2.2. NAND flash…………………………………………………………………………………..22
Chapter 8: Installation……………………..……………………………………………………………25
8.0. How to install Hard Drive……………………..……………………………………………………..25
8.1. Desktop hard drive……………………………..…………………………………………………….25
8.2. Laptop Hard Drive………………………………..………………………………………………… 27
Chapter 9: Comparison among secondary storages…………………………………………………..27
9.1. Hard drive vs magnetic tape…………………………………………………………………………..27
9.2. CD vs DVD………………………………………………………………………………….……….28
9.3. Floppy drive vs USB drive………………………………………………………...............................28
Chapter 10: Market Available Secondary Memory…………………………..………………………28
10.1. Top hard disks available in the market………………………………………..…………………….28
10.2. Market Available USB drives………………………………………………..……………………..29
10.3. Some Optical drive Brands in market…………………………………………………………..30
Chapter 11: Conclusion……………………………………………………..……………….………30
List of Figures
Figure 4.1: parts of hard disk…………………………………………………………….6
Figure 6.1. Icon for the Zip drive………………………………………………………16
Figure 7.1. Programming a NOR memory cell………………………………………….21
Figure 7.2. Erasing a NOR memory cell…………………………………………………21
List of Tables
Table I. Speed of a CD- Audio……………………………………………………………………….11
Table II. CD Physical Specifications………………………………………………………………….11
1. Introduction
Network operators and system administrators are interested in the mixture of traffic carried in their
networks for several reasons. Knowledge about traffic composition is valuable for network
planning, accounting, security, and traffic control. Traffic control includes packet scheduling and
intelligent buffer management to provide the quality of service (QoS) needed by applications. It is
necessary to determine to which applications packets belong, but traditional protocol layering
principles restrict the network to processing only the IP packet header.
1.1 sfjalskfjksadjdfakl
In Section 2, we review the previous work in traffic classification. Section 3 addresses the
question of useful features and the number of QoS classes. We describe experiments with unsupervised
clustering of real traffic traces to build classification rules. Given the discovered QoS classes,
Section 4 presents experimental evaluation of classification accuracy using k-nearest neighbor
compared to minimum mean distance clustering.
2. Related Work
Research in traffic classification, which avoids payload inspection, has accelerated over the last
five years. It is generally difficult to compare different approaches, because they vary in the
selection of features (some requiring inspection of the packet payload), choice of supervised or
unsupervised classification algorithms, and set of classified traffic classes. The wide range of
previous approaches can be seen in the comprehensive survey by Nguyen and Armitage [1].
Further complicating comparisons between different studies is the fact that classification
performance depends on how the classifier is trained and the test data used to evaluate accuracy.
Unfortunately, a universal set of test traffic data does not exist to allow uniform comparisons of
different classifiers.
A common approach is to classify traffic on the basis of flows instead of individual packets.
Trussell et al. proposed the distribution of packet lengths as a useful feature [2]. McGregor et al.
used a variety of features: packet length statistics, interarrival times, byte counts, connection
duration [3]. Flows with similar features were grouped together using EM
(expectation-maximization) clustering. Having found the clusters representing a set of traffic
classes, the features contributing little were deleted to simplify classification and the clusters were
recomputed with the reduced feature set. EM clustering was also studied by Zander, Nguyen, and
Armitage [4]. Sequential forward selection (SFS) was used to reduce the feature set. The same
authors also tried AutoClass, an unsupervised Bayesian classifier, for cluster formation and SFS for
feature set reduction [5]. …
3. Unsupervised Clustering
3.1 Self-Organizing Map
SOM is trained iteratively. In each training step, one sample vector x from the input data pool is
chosen randomly, and the distances between it and all the SOM codebook vectors are calculated
using some distance measure. The neuron whose codebook vector is closest to the input vector is
called the best-matching unit (BMU), denoted by m_c:

‖x − m_c‖ = min_i ‖x − m_i‖    (1)

where ‖·‖ denotes the Euclidean distance and the m_i are the codebook vectors.
After finding the BMU, the SOM codebook vectors are updated such that the BMU is moved closer
to the input vector. The topological neighbors of the BMU are treated the same way. This procedure
moves the BMU and its topological neighbors towards the sample vector. The update rule for the ith
codebook vector is:

m_i(n+1) = m_i(n) + r(n) h_ci(n) [x(n) − m_i(n)]    (2)

where n is the training iteration number, x(n) is the input vector randomly selected from the input
data set at the nth iteration, r(n) is the learning rate at the nth iteration, and h_ci(n) is the kernel
function around the BMU m_c. The kernel function defines the region of influence that x has on the
map.
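To make the procedure concrete, the following Python sketch implements a single SOM training step per Eqs. (1) and (2). It is an illustrative toy, not the implementation used for the experiments: it assumes a 1-D grid of five two-dimensional codebook vectors and a Gaussian neighborhood kernel, and all parameter values are made up.

```python
import math
import random

def euclidean(a, b):
    # Euclidean distance between two vectors of equal length
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def som_train_step(codebook, x, rate, sigma):
    """Find the BMU (Eq. 1), then pull the BMU and its grid
    neighbors towards the input vector x (Eq. 2)."""
    # Best-matching unit: index c minimizing ||x - m_i||
    c = min(range(len(codebook)), key=lambda i: euclidean(x, codebook[i]))
    for i, m in enumerate(codebook):
        # Gaussian neighborhood kernel h_ci(n) over the 1-D grid index
        h = math.exp(-((i - c) ** 2) / (2.0 * sigma ** 2))
        # m_i <- m_i + r * h_ci * (x - m_i)
        codebook[i] = [mj + rate * h * (xj - mj) for mj, xj in zip(m, x)]
    return c

random.seed(0)
codebook = [[random.random(), random.random()] for _ in range(5)]
bmu = som_train_step(codebook, [1.0, 1.0], rate=0.5, sigma=1.0)
```

After one step the BMU's codebook vector has moved halfway towards the input (since h = 1 for the BMU itself and the learning rate is 0.5), while more distant grid neighbors move proportionally less.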
…
Fig. 1 shows the U-matrix and the components planes for the feature variables. The U-matrix is a
visualization of distance between neurons, where distance is color coded according to the spectrum
shown next to the map. Blue areas represent codebook vectors close to each other in input space,
i.e., clusters.
Fig. 1. U-matrix with 7 components scaled to [0,1]
3.2 K-Means Clustering
The K-means clustering algorithm starts with a training data set and a given number of clusters K.
The samples in the training data set are assigned to a cluster based on a similarity measurement.
Euclidean distance is generally used to measure the similarity. The K-means algorithm tries to find
an optimal solution by minimizing the square error:
Er = Σ_{i=1}^{K} Σ_{j=1}^{n} ‖x_j − c_i‖²    (3)

where K is the number of clusters, n is the number of training samples, c_i is the center of the ith
cluster, and ‖x_j − c_i‖ is the Euclidean distance between sample x_j and the center c_i.
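A minimal K-means sketch that minimizes the square error of Eq. (3) is shown below. It is illustrative only: the toy data are invented, and the initial centers are simply taken as the first K samples rather than chosen by any of the seeding strategies used in practice.

```python
def kmeans(samples, k, iters=10):
    # Initialize centers with the first k samples (a simplifying assumption)
    centers = [list(s) for s in samples[:k]]
    for _ in range(iters):
        # Assignment step: each sample joins its nearest center
        clusters = [[] for _ in range(k)]
        for x in samples:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(x, centers[c])))
            clusters[i].append(x)
        # Update step: move each center to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:  # keep an empty cluster's center unchanged
                centers[i] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers, clusters

# Two well-separated blobs should yield two matching centers
data = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0),
        (5.0, 5.1), (5.1, 5.0), (5.0, 5.0)]
centers, clusters = kmeans(data, 2)
```

Each iteration can only decrease (or leave unchanged) the square error Er, which is why the alternation of assignment and update steps converges to a local minimum.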
…
4. Experimental Classification Results and Analysis
The previous section identified three clusters for QoS classes and features to build up classification
rules through unsupervised learning. In this section, the accuracy of the classification rules is
evaluated experimentally. For classification, we chose the K-nearest neighbor (KNN) algorithm.
Experimental results are compared with the minimum mean distance (MMD) classifier.
The selected application lists for each class and the number of applications in each class are
shown in Table I.
Table I. Applications in each class
Class          Applications                                          Total number
Transactional  53/TCP, 13/TCP, 111/TCP, …                            112
Interactive    23/TCP, 21/TCP, 43/TCP, 513/TCP, 514/TCP, 540/TCP,    77
               251/TCP, 1017/TCP, 1019/TCP, 1020/TCP, 1022/TCP, …
Bulk data      80/TCP, 20/TCP, 25/TCP, 70/TCP, 79/TCP, 81/TCP,       1351
               82/TCP, 83/TCP, 84/TCP, 119/TCP, 210/TCP, 8080/TCP, …
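To illustrate how a KNN rule of the kind evaluated in this section assigns a QoS class, consider the following Python sketch. The flow features (mean packet length, flow duration) and training labels are invented for the example; they are not the paper's data or feature set.

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbors of x."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical flows: (mean packet length, duration in s) -> QoS class
train = [((60, 0.1), "transactional"), ((70, 0.2), "transactional"),
         ((400, 5.0), "interactive"), ((450, 6.0), "interactive"),
         ((1400, 60.0), "bulk"), ((1500, 80.0), "bulk")]

label = knn_predict(train, (1450, 70.0), k=3)  # a long, large-packet flow
```

Unlike the MMD classifier, which summarizes each class by a single mean vector, KNN keeps all training samples and lets the local neighborhood decide, which is why it can capture irregular class boundaries at the cost of more computation per query.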
5. Conclusion
Traffic classification was carried out in two phases. In the first off-line phase, we started with no
assumptions about traffic classes and used the unsupervised SOM and K-means clustering
algorithms to find the structure in the traffic data. The data exploration procedure found three
clusters corresponding to three QoS classes: transactional, interactive, and bulk data transfer. …
In the second classification phase, the accuracy of the KNN classifier was evaluated for test data.
Leave-one-out cross-validation tests showed that this algorithm had a low error rate. The KNN
classifier was found to have an error rate of about 2 percent for the test data, compared to an error
rate of 7 percent for an MMD classifier. KNN is one of the simplest classification algorithms, but not
necessarily the most accurate. Other supervised algorithms, such as back propagation (BP) and
SVM, also have attractive features and should be compared in future work.
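The leave-one-out protocol used to estimate the error rates above can be sketched as follows. The classifier and the four toy samples are stand-ins for illustration; the real evaluation used the traffic traces described earlier.

```python
import math
from collections import Counter

def knn_predict(train, x, k=1):
    # Nearest-neighbor vote over the k closest training samples
    nearest = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loo_error_rate(data, classify):
    """Leave-one-out cross-validation: hold out each sample in turn,
    train on the rest, and count misclassifications."""
    errors = 0
    for i, (x, label) in enumerate(data):
        held_out = data[:i] + data[i + 1:]  # all samples except i
        if classify(held_out, x) != label:
            errors += 1
    return errors / len(data)

# Hypothetical flows: two well-separated classes
data = [((60, 0.1), "transactional"), ((70, 0.2), "transactional"),
        ((1400, 60.0), "bulk"), ((1500, 80.0), "bulk")]
rate = loo_error_rate(data, knn_predict)
```

Because every sample is classified by a model that never saw it, leave-one-out gives a nearly unbiased estimate of the error rate, at the cost of training the classifier once per sample.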
References
[1] Thuy Nguyen and Grenville Armitage, “A survey of techniques for Internet traffic classification using
machine learning,” IEEE Communications Surveys and Tutorials, vol. 10, no. 4, pp. 56-76, November,
2008.
[2] H. Trussell, A. Nilsson, P. Patel and Y. Wang, “Estimation and detection of network traffic,” in Proc. of
11th Digital Signal Processing Workshop, pp. 246-248, January 12-16, 2004.
[3] Anthony McGregor, Mark Hall, Perry Lorier and James Brunskill, “Flow clustering using machine
learning techniques,” in Proc. of 5th Int. Workshop on Passive and Active Network Measurement,
pp. 205-214, June 2-7, 2004.
[4] Sebastian Zander, Thuy Nguyen and Grenville Armitage, “Self-learning IP traffic classification based
on statistical flow characteristics,” in Proc. of 6th Int. Workshop on Passive and Active Measurement,
pp. 325-328, March 23-27, 2005.
[5] Sebastian Zander, Thuy Nguyen and Grenville Armitage, “Automated traffic classification and
application identification using machine learning,” in Proc. of IEEE Conf. on Local Computer Networks,
pp. 250-257, February 11-12, 2005.
[6] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd Edition, Wiley, New York,
2006.