Recently, machine learning has been introduced into the area of adaptive video streaming. This paper explores a novel taxonomy covering six state-of-the-art machine learning techniques that have been applied to Dynamic Adaptive Streaming over HTTP (DASH): (1) Q-learning, (2) reinforcement learning, (3) regression, (4) classification, (5) decision tree learning, and (6) neural networks.
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING (ijdkp)
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time
grows as the size and complexity of the training dataset increase. For this reason, it is important to have
efficient computational methods and algorithms that can be applied to large datasets, such that it is still
possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper
a more accurate, simple process to speed up ML methods. An unsupervised clustering algorithm is
combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model
(HMM) training procedure. The proposed process consists of two steps. In the first step, training instances
with similar inputs are clustered, and a weight factor representing the frequency of these instances is
assigned to each representative cluster; the Dynamic Time Warping technique is used as the dissimilarity
function for clustering similar examples. In the second step, all formulas in the classical HMM training
algorithm (EM) associated with the number of training instances are modified to include the weight factor
in the appropriate terms. This process significantly accelerates HMM training while maintaining the same
initial, transition, and emission probability matrices as those obtained with the classical HMM training
algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set,
speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach
is not limited to training HMMs; it can be employed for a large variety of ML methods.
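The core of the two-step process, folding a frequency weight into statistics that previously summed over all training instances, can be sketched as follows. This is an illustrative example using exact-duplicate clustering and a weighted mean; the paper clusters with Dynamic Time Warping and modifies the full EM re-estimation formulas:

```python
from collections import Counter

def weighted_mean(samples):
    # Step 1: cluster identical (here: exactly equal) instances and
    # record a weight = frequency of each representative.
    weights = Counter(samples)
    # Step 2: fold the weight factor into every term that previously
    # summed over all N training instances.
    total_w = sum(weights.values())
    return sum(w * x for x, w in weights.items()) / total_w

data = [1.0, 2.0, 2.0, 3.0, 3.0, 3.0]
# The weighted statistic over unique instances equals the statistic
# over the full dataset, so accuracy is preserved while the loop
# runs over far fewer items.
assert weighted_mean(data) == sum(data) / len(data)
```

The same substitution applies to every EM count: wherever the classical update sums over instances, the reduced form sums over cluster representatives multiplied by their weights.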
Facial image retrieval on semantic features using adaptive mean genetic algor... (TELKOMNIKA JOURNAL)
The emergence of larger databases has made image retrieval techniques an essential component and has led to the development of more efficient image retrieval systems. Retrieval can be either content-based or text-based. In this paper, the focus is on content-based image retrieval from the FGNET database. Input query images and database images are subjected to several processing techniques before computing the squared Euclidean distance (SED) between them. The images with the shortest Euclidean distance are considered matches and are retrieved. The processing techniques involve the application of the median modified Wiener filter (MMWF) and the extraction of low-level features using histograms of oriented gradients (HOG), the discrete wavelet transform (DWT), GIST, and the local tetra pattern (LTrP). Finally, the features are selected using the adaptive mean genetic algorithm (AMGA). In this study, the average PSNR value obtained after applying the Wiener filter was 45.29. The performance of the AMGA was evaluated based on its precision, F-measure, and recall; the obtained average values were 0.75, 0.692, and 0.66, respectively. The performance metrics of the AMGA were compared to those of the particle swarm optimization algorithm (PSO) and the genetic algorithm (GA), and the AMGA was found to perform better, demonstrating its efficiency.
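The nearest-match step with the squared Euclidean distance (SED) can be sketched as follows. This is a minimal example over precomputed feature vectors; the HOG/DWT/GIST/LTrP extraction and AMGA selection stages are assumed to have already produced `db_feats`:

```python
import numpy as np

def retrieve(query_feat, db_feats, k=3):
    # Squared Euclidean distance (SED) between the query feature
    # vector and every database feature vector.
    d = np.sum((db_feats - query_feat) ** 2, axis=1)
    # The images with the shortest distance are treated as matches.
    return np.argsort(d)[:k]

db = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
query = np.array([0.9, 1.1])
print(retrieve(query, db, k=2))  # indices of the two closest images
```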
Semantic Image Retrieval Using Relevance Feedback (dannyijwest)
This paper presents an optimized interactive content-based image retrieval framework based on the AdaBoost
learning method. Since relevance feedback (RF) is an online process, we optimize the learning
process by selecting the most positive image on each feedback iteration. The system is trained with
AdaBoost. The main contributions of our system are addressing the small training sample problem and
reducing retrieval time. Experiments are conducted on 1000 semantic colour images from the Corel database to
demonstrate the effectiveness of the proposed framework. These experiments employed a large image
database and combined RCWFs and DT-CWT texture descriptors to represent the content of the images.
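AdaBoost itself can be sketched with one-dimensional decision stumps. This is a generic minimal implementation, not the paper's RF loop; labels collected from a feedback round would play the role of `y`, and RCWF/DT-CWT descriptors the role of `X`:

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost with threshold stumps (labels y in {-1, +1})."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # sample weights, re-weighted every round
    ensemble = []
    for _ in range(rounds):
        best = None
        # exhaustive search over (feature, threshold, polarity) stumps
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = np.where(X[:, j] >= t, s, -s)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] >= t, s, -s)
        w *= np.exp(-alpha * y * pred)  # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(X[:, j] >= t, s, -s) for a, j, t, s in ensemble)
    return np.sign(score)
```

In an RF setting, each feedback iteration would append the newly labelled images to `X`/`y` and retrain; the stump search keeps training cheap for small samples.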
High performance modified bit-vector based packet classification module on lo... (IJECEIAES)
Packet classification plays a significant role in many network systems, which require incoming packets to be categorized into different flows so that specific actions can be taken as per functional and application requirements. Network system speeds are continuously increasing, so the demands on the packet classifier have also increased. The packet classifier's complexity grows further because multiple fields must be matched against a large number of rules. In this manuscript, an efficient, high-performance modified bit-vector (MBV) based packet classification (PC) module is designed and implemented on a low-cost Artix-7 FPGA. The proposed MBV-based PC employs a pipelined architecture, which offers low latency and high throughput. The MBV-based PC utilizes <2% of slices, operates at 493.102 MHz, and consumes 0.1 W of total power on the Artix-7 FPGA. The proposed PC takes only 4 clock cycles to classify an incoming packet and provides 74.95 Gbps throughput. Comparative results show that the proposed design improves on existing similar PC approaches in terms of hardware utilization and performance efficiency.
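The bit-vector idea behind such classifiers can be sketched in software with a two-field toy ruleset. A real MBV design precomputes the per-field vectors in hardware lookup tables and ANDs them in a pipeline; here each lookup is recomputed on the fly for clarity:

```python
# Each rule is (src_port, dst_port); None is a wildcard. Rule order
# encodes priority: lower index = higher priority.
RULES = [
    (80,  None),   # rule 0
    (None, 443),   # rule 1
    (None, None),  # rule 2 (default)
]

def field_vector(field_idx, value):
    # Bit-vector whose bit i is set when rule i matches this field value.
    bv = 0
    for i, rule in enumerate(RULES):
        if rule[field_idx] is None or rule[field_idx] == value:
            bv |= 1 << i
    return bv

def classify(src_port, dst_port):
    # AND the per-field vectors: surviving bits are rules matching
    # ALL fields; the lowest set bit is the highest-priority match.
    bv = field_vector(0, src_port) & field_vector(1, dst_port)
    return (bv & -bv).bit_length() - 1 if bv else -1
```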
RunPool: A Dynamic Pooling Layer for Convolution Neural Network (Putra Wanda)
Deep learning (DL) has achieved significant performance in computer vision problems, mainly in automatic feature extraction and representation. However, it is not easy to determine the best pooling method across different case studies. For instance, experts can choose the best type of pooling for image processing cases, but that choice might not be optimal for other tasks. In a dynamic neural network architecture, it is not practical to find a proper pooling technique for the layers by hand, which is the primary reason a single fixed pooling cannot serve dynamic and multidimensional datasets. To deal with these limitations, an optimal pooling method is needed as a better option than max pooling and average pooling. Therefore, we introduce a dynamic pooling layer called RunPool to train convolutional neural network (CNN) architectures. RunPool pooling is proposed to regularize the neural network by replacing the deterministic pooling functions. In the final section, we test the proposed pooling layer on classification problems with an online social network (OSN) dataset.
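RunPool's exact formulation is not given in this abstract, but the general idea of a pooling layer that decides at run time between the deterministic poolings can be sketched as follows. The per-window statistic and threshold here are hypothetical, chosen purely for illustration:

```python
import numpy as np

def dynamic_pool(x, size=2, spread_thresh=1.0):
    """Pool non-overlapping windows, choosing max- or average-pooling
    per window from a runtime statistic (illustrative criterion only)."""
    h, w = x.shape
    out = np.empty((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = x[i*size:(i+1)*size, j*size:(j+1)*size]
            # high spread -> one dominant activation -> keep the max;
            # low spread  -> smooth region -> the average keeps more signal
            out[i, j] = win.max() if win.std() > spread_thresh else win.mean()
    return out
```

A trainable layer would replace the fixed threshold with a learned decision, which is what distinguishes a dynamic pooling layer from the deterministic max/average operators.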
Technical analysis of content placement algorithms for content delivery netwo... (IJECEIAES)
Content placement algorithms are an integral part of a cloud-based content delivery network. They are responsible for selecting precisely which content is deposited on the surrogate servers distributed over a geographical region. Although various works have already been carried out in this sector, most of them have loopholes that have received little disclosure. It is well known that quality of service, quality of experience, and cost are essential objectives targeted for improvement in existing work, yet there are various other aspects and underlying considerations that are equally important from a design perspective. Therefore, this paper reviews the existing content placement algorithms for cloud-based content delivery networks with the aim of exposing open research issues.
TEST-COST-SENSITIVE CONVOLUTIONAL NEURAL NETWORKS WITH EXPERT BRANCHES (sipij)
It has been shown that deeper convolutional neural networks (CNNs) can achieve better accuracy in many
problems, but this accuracy comes at a high computational cost. Moreover, input instances do not all have the same
difficulty. As a solution to the accuracy vs. computational cost dilemma, we introduce a new test-cost-sensitive
method for convolutional neural networks. This method trains a CNN with a set of auxiliary outputs and
expert branches in some middle layers of the network. Based on the difficulty of the input instance, the expert
branches decide whether to use a shallower part of the network or to go deeper to the end. The expert branches
learn to determine whether the current network prediction is wrong and whether passing the instance to deeper
layers of the network would produce the right output; if not, the expert branches stop the computation
process. Experimental results on the standard CIFAR-10 dataset show that the proposed method can train
models with lower test cost and competitive accuracy compared with the basic models.
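The early-exit behaviour of an expert branch can be sketched with a confidence threshold. This is illustrative only: in the paper the branches are learned, whereas here a fixed softmax-confidence rule stands in, and `shallow_logits`/`deep_logits` are hypothetical stand-ins for the partial and full forward passes:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_with_branch(x, shallow_logits, deep_logits, conf_thresh=0.9):
    """Expert-branch sketch: exit at the auxiliary output when it is
    confident, otherwise pay the extra cost of the deeper layers."""
    p = softmax(shallow_logits(x))
    if p.max() >= conf_thresh:
        return int(p.argmax()), "shallow"   # cheap path for easy inputs
    p = softmax(deep_logits(x))
    return int(p.argmax()), "deep"          # full network for hard inputs
```

The average test cost then depends on how many instances exit early, which is exactly the quantity a learned expert branch tries to maximize without hurting accuracy.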
Integration of a Predictive, Continuous Time Neural Network into Securities M... (Chris Kirk, PhD, FIAP)
This paper describes the recent development and test implementation of a continuous-time recurrent neural network configured to predict rates of change in securities. It presents outcomes in the context of popular technical analysis indicators and highlights the potential impact of continuous predictive capability on securities market trading operations.
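Continuous-time recurrent dynamics are commonly written as tau * dy/dt = -y + tanh(W y + x); a minimal Euler-integration step might look like this. This is a generic CTRNN sketch, not the authors' configuration for securities data:

```python
import numpy as np

def ctrnn_step(y, x, W, tau, dt=0.01):
    """One Euler-integration step of a continuous-time RNN:
       tau * dy/dt = -y + tanh(W @ y + x)
    y: state vector, x: external input, W: recurrent weights."""
    dydt = (-y + np.tanh(W @ y + x)) / tau
    return y + dt * dydt

# integrate a small 2-neuron network for a few steps
y = np.zeros(2)
W = 0.5 * np.eye(2)
for _ in range(100):
    y = ctrnn_step(y, np.array([1.0, -1.0]), W, tau=1.0)
```

For prediction, the input `x` would be a stream of price-derived features and the readout of `y` the forecast rate of change.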
Routing in Wireless Mesh Networks: Two Soft Computing Based Approaches (ijmnct)
Due to dynamic network conditions, routing is the most critical part of WMNs and needs to be optimised.
The routing strategies developed for WMNs must be efficient to make them operationally self-configurable
networks; we therefore resort to near-shortest-path evaluation. This motivates soft computing approaches
that can find a near-shortest path in affordable computing time. This paper proposes a fuzzy-logic-based
integrated cost measure in terms of delay, throughput, and jitter. Based on this distance (cost) between two
adjacent nodes, we evaluate the minimal path and update the routing tables. We apply two recent soft
computing approaches, namely Big Bang-Big Crunch (BB-BC) and Biogeography Based Optimization (BBO),
to enumerate shortest or near-shortest paths. BB-BC theory is related to the evolution of the universe,
whereas BBO is inspired by the dynamic equilibrium in the number of species on an island. Both algorithms
have low computational time and high convergence speed. Simulation results show that the proposed routing
algorithms find the optimal shortest path while taking into account the three most important parameters of
network dynamics. It has further been observed that for the shortest path problem, BB-BC outperforms BBO
in terms of speed and the percentage error between the evaluated minimal path and the actual shortest path.
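The integrated link cost and the shortest-path step can be sketched as follows. The weights in `link_cost` are an illustrative crisp stand-in for the paper's fuzzy rule base, and Dijkstra's algorithm stands in for the BB-BC/BBO search:

```python
import heapq

def link_cost(delay, jitter, throughput):
    # Crisp stand-in for the fuzzy integrated cost: lower delay and
    # jitter and higher throughput make a link cheaper (weights are
    # illustrative, not the paper's fuzzy rules).
    return 0.5 * delay + 0.3 * jitter + 0.2 / max(throughput, 1e-9)

def shortest_path(graph, src, dst):
    # graph: {node: [(neighbor, cost), ...]} with costs from link_cost
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

The metaheuristics in the paper would search the same cost landscape; Dijkstra is used here only to show how the integrated cost drives route selection.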
Hangul Recognition Using Support Vector Machine (Editor IJCATR)
The recognition of Hangul images is more difficult than that of Latin script because of the structural arrangement: Hangul is arranged in two dimensions, while Latin runs only from left to right. The current research creates a system to convert Hangul images into Latin text so that it can be used as learning material for reading Hangul. In general, the image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through the connected-component labeling method, and thinning with the Zhang-Suen algorithm to reduce pattern information. The second is extracting a feature from every single image, with identification done through the chain code method. The third is recognition using a Support Vector Machine (SVM) with several kernels. The system handles both letter images and Hangul word recognition. There are 34 letters, each of which has 15 different patterns, giving 510 patterns in total, divided into 3 data scenarios. The highest result achieved is 94.7%, using the SVM polynomial and radial basis function kernels; the recognition rate is influenced by the amount of training data. The recognition of Hangul words applies to type-2 Hangul words with 6 different patterns, where the differences arise from changes of font type. The fonts chosen for training are Batang, Dotum, Gaeul, Gulim, and Malgun Gothic; Arial Unicode MS is used for testing. The lowest accuracy, 69%, is obtained with the SVM radial basis function kernel, while the linear and polynomial kernels both give 72%.
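The feature and kernel choices described above can be sketched as follows: a normalized 8-direction chain-code histogram as a fixed-length feature, with the polynomial and RBF kernels the SVM would use. The SVM training itself is omitted:

```python
import numpy as np

def chain_code_histogram(codes):
    # 8-direction Freeman chain code -> normalized direction histogram,
    # a simple fixed-length feature vector for an SVM
    h = np.bincount(codes, minlength=8).astype(float)
    return h / h.sum()

def poly_kernel(a, b, degree=3, c=1.0):
    # polynomial kernel: (a . b + c)^degree
    return (a @ b + c) ** degree

def rbf_kernel(a, b, gamma=0.5):
    # radial basis function kernel: exp(-gamma * ||a - b||^2)
    return np.exp(-gamma * np.sum((a - b) ** 2))
```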
Security and imperceptibility improving of image steganography using pixel al... (IJECEIAES)
Information security is one of the main aspects of processes and methodologies in the technical age of information and communication. The security of information should be a key priority in the secret exchange of information between two parties. To ensure the security of information, strategies such as steganography and cryptography are used. An effective digital image steganographic method based on odd/even pixel allocation and a random function has been developed to increase security and imperceptibility. This newly developed scheme has been verified to increase security and imperceptibility and to address existing problems. Huffman coding is used to encode the secret data prior to the embedding stage; this modification conceals the secret data from attackers and increases the secret data capacity. The main objective of our scheme is to boost the peak signal-to-noise ratio (PSNR) of the stego cover and to resist attacks, while also increasing the size of the secret data that can be carried. The results confirm good PSNR values, and these findings confirm the eligibility of the proposed method.
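The embedding side can be sketched as LSB substitution at key-selected pixel positions. This is illustrative only: a keyed PRNG stands in for the paper's odd/even allocation and random function, and the Huffman-coding stage is omitted:

```python
import random

def embed(pixels, bits, key=42):
    """Hide bits in pixel LSBs at positions drawn from a keyed random
    function (illustrative allocation rule, not the paper's scheme)."""
    out = list(pixels)
    rng = random.Random(key)
    # pick distinct pixel positions pseudo-randomly from the keyed stream
    positions = rng.sample(range(len(out)), len(bits))
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit  # overwrite the LSB only
    return out, positions

def extract(pixels, positions):
    # the receiver regenerates the same positions from the shared key
    return [p & 1 for p in (pixels[i] for i in positions)]
```

Changing only LSBs bounds the per-pixel distortion at 1, which is why such schemes keep the stego image's PSNR high.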
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin... (csandit)
A Computational Grid (CG) creates a large heterogeneous and distributed paradigm to manage and execute computationally intensive applications. In grid scheduling, tasks are assigned to the proper processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is NP-hard, meta-heuristic evolutionary techniques are often used to find a solution, and we propose an NSGA-II for this purpose. The performance of the proposed Fault-tolerance Aware NSGA-II (FTNSGA-II) has been estimated with a program written in Matlab. The simulation results evaluate the performance of the proposed algorithm, and the results of the proposed model are compared with the existing Min-Min and Max-Min algorithms, which demonstrates the effectiveness of the model.
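The multi-objective core of NSGA-II, sorting candidate schedules into non-dominated fronts by objective pairs such as (makespan, failure probability), can be sketched as follows. This is non-dominated sorting in its simplest form; crowding distance and the genetic operators are omitted:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective (all minimized)
    # and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    # peel off Pareto fronts: front 0 is undominated, front 1 is
    # undominated once front 0 is removed, and so on
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

In FTNSGA-II each point would be a candidate task-to-node assignment scored on makespan and node reliability; selection then prefers lower-ranked fronts.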
In the recent machine learning community, there has been a trend of constructing nonlinear versions of linear
algorithms through the 'kernel method', for example kernel principal component analysis, kernel
Fisher discriminant analysis, support vector machines (SVMs), and the current kernel clustering
algorithms. Typically, in unsupervised kernel clustering algorithms, a nonlinear mapping is first applied to
map the data into a much higher-dimensional feature space, and then clustering is executed there. A drawback
of these kernel clustering algorithms is that the cluster prototypes reside in the high-dimensional feature
space and therefore lack intuitive, clear descriptions unless an additional approximate projection from the
feature space back to the data space is applied, as is done in the existing literature. This paper utilizes the
'kernel method' to propose a novel clustering algorithm, founded on the conventional fuzzy c-means
algorithm (FCM) and known as the kernel fuzzy c-means algorithm (KFCM). This method adopts a new
kernel-induced metric in the data space to replace the original Euclidean norm, so that the cluster
prototypes still reside in the data space and the clustering results can be interpreted in the original space.
This property is used for clustering incomplete data. Experiments on synthetic data illustrate that KFCM has
better clustering performance and is more robust than other variants of FCM for clustering incomplete data.
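The KFCM iteration with a Gaussian kernel can be sketched as follows. Since K(x, x) = 1 for the Gaussian kernel, the kernel-induced distance reduces to d^2 = 2(1 - K(x, v)), and the prototypes remain in the data space as the abstract describes; initializing from the first c points is a simplification of this sketch:

```python
import numpy as np

def kfcm(X, c=2, m=2.0, gamma=0.5, iters=50):
    """Kernel fuzzy c-means with a Gaussian kernel; prototypes V stay
    in the input space, so d^2(x, v) = 2 * (1 - K(x, v))."""
    V = X[:c].copy()  # simple deterministic initialization
    for _ in range(iters):
        # Gaussian kernel between every point and every prototype
        K = np.exp(-gamma * ((X[:, None, :] - V[None]) ** 2).sum(-1))
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)
        # fuzzy membership update (fuzzifier m)
        U = (1.0 / d2) ** (1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
        # kernel-weighted prototype update, still in the data space
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V
```

Because V lives in the original space, missing attributes of incomplete data can be imputed directly from the prototypes, which is the property the paper exploits.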
A SIMPLE PROCESS TO SPEED UP MACHINE LEARNING METHODS: APPLICATION TO HIDDEN ... (cscpconf)
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training procedure. The proposed process consists of two steps. The first is a preprocessing step that reduces the number of training instances without any information loss: training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition, and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs; it can be employed for a large variety of ML methods.
Operating Task Redistribution in Hyperconverged Networks (IJECEIAES)
In this article, a search method for the rational distribution of tasks across the nodes of a hyperconverged network is presented, providing a rational distribution of task sets for better performance. By using new subsettings for the distribution of nodes in the network based on distributed processing, the average packet delay can be minimized. The distribution quality is ensured with a special objective function that penalizes delays; this process is designed to create balanced delivery systems. The initial redistribution is determined based on the minimum penalty. After performing one redistribution cycle (iteration) to obtain an appropriate task distribution, a potential system is formed for functional optimization. In each redistribution cycle, a rule for optimizing the contour search is used. Thus, the obtained task distribution, accounting for failures and successes, will be rational and can decrease the average packet delay in hyperconverged networks. The effectiveness of the proposed method is evaluated using the model of the hyperconverged support system for university e-learning provided by V.N. Karazin Kharkiv National University. The simulation results based on this model clearly confirm the better performance of our approach in comparison with the classical approach to task distribution.
STUDY OF TASK SCHEDULING STRATEGY BASED ON TRUSTWORTHINESS (ijdpsjournal)
MapReduce is a distributed computing model for cloud computing to process massive data, and it simplifies
the writing of distributed parallel programs. With the fault-tolerance technology in the MapReduce
programming model, tasks may be allocated to nodes with low reliability, causing them to be re-executed and
wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure
recovery mechanism, evaluates the trustworthiness of resource nodes in the cloud environment, and builds a
trustworthiness model. Using the CloudSim simulation platform, the stability of the task scheduling
algorithm and the scheduling model are verified.
WMNs: The Design and Analysis of Fair Scheduling (iosrjce)
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION... (ijcsit)
The wide usage of the Internet and the availability of powerful computers and high-speed networks as low-cost
commodity components have had a deep impact on the way we use computers today; these technologies have
facilitated the use of multi-owner, geographically distributed resources to address large-scale problems in
many areas such as science, engineering, and commerce. The new paradigm of Grid computing has evolved
from research on these topics. Performance and utilization of the grid depend on a complex and highly
dynamic procedure of optimally balancing the load among the available nodes. In this paper, we suggest a
novel two-dimensional figure of merit that depicts the network effects on load balance and fault tolerance
estimation to improve network utilization. The enhancement of fault tolerance is obtained by adaptively
decreasing replication time and message cost; load balance is improved by adaptively decreasing the mean
job response time. Finally, an analysis of the Genetic Algorithm, Ant Colony Optimization, and Particle
Swarm Optimization is conducted with regard to their solutions, issues, and improvements concerning load
balancing in computational grids. Consequently, a significant improvement in system utilization was
attained, and experimental results demonstrate that the proposed method's performance surpasses that of
other methods.
Integration of a Predictive, Continuous Time Neural Network into Securities M...Chris Kirk, PhD, FIAP
This paper describes the recent development and test implementation of a continuous-time recurrent neural network configured to predict rates of change in securities. It presents outcomes in the context of popular technical-analysis indicators and highlights the potential impact of continuous predictive capability on securities-market trading operations.
Routing in Wireless Mesh Networks: Two Soft Computing Based Approachesijmnct
Due to dynamic network conditions, routing is the most critical part of WMNs and needs to be optimised. The routing strategies developed for WMNs must be efficient enough to make them operationally self-configurable networks, so we must resort to near-shortest-path evaluation. This motivates soft-computing approaches that can deliver a near-shortest path in affordable computing time. This paper proposes a Fuzzy Logic based integrated cost measure in terms of delay, throughput and jitter. Based on this distance (cost) between two adjacent nodes, we evaluate a minimal shortest path that updates the routing tables. We apply two recent soft-computing approaches, Big Bang Big Crunch (BB-BC) and Biogeography Based Optimization (BBO), to enumerate shortest or near-shortest paths. BB-BC theory is related to the evolution of the universe, whereas BBO is inspired by the dynamic equilibrium in the number of species on an island. Both algorithms have low computational time and high convergence speed. Simulation results show that the proposed routing algorithms find the optimal shortest path while taking into account three of the most important parameters of network dynamics. It has further been observed that, for the shortest-path problem, BB-BC outperforms BBO in terms of speed and the percent error between the evaluated minimal path and the actual shortest path.
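The integrated-cost idea above can be sketched in a few lines. The fuzzy rule base itself is not given in the abstract, so the cost function below is a crisp weighted stand-in (the 0.4/0.4/0.2 weights are illustrative assumptions), and plain Dijkstra replaces the BB-BC/BBO metaheuristics simply to show how such a per-link cost drives path selection:

```python
import heapq

def link_cost(delay, throughput, jitter):
    """Crisp stand-in for the fuzzy integrated cost: each metric is assumed
    normalised to [0, 1]; lower is better, so throughput is inverted."""
    return 0.4 * delay + 0.4 * (1.0 - throughput) + 0.2 * jitter

def dijkstra(graph, src, dst):
    """Minimal-cost path over the integrated link costs."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

# Toy mesh: each edge carries (delay, throughput, jitter) in [0, 1].
edges = {("A", "B"): (0.2, 0.9, 0.1), ("B", "C"): (0.3, 0.8, 0.2),
         ("A", "C"): (0.9, 0.3, 0.8)}
graph = {}
for (u, v), m in edges.items():
    c = link_cost(*m)
    graph.setdefault(u, []).append((v, c))
    graph.setdefault(v, []).append((u, c))

cost, path = dijkstra(graph, "A", "C")   # two cheap hops beat one poor link
```

With these numbers the two-hop route A-B-C is cheaper than the direct but congested A-C link, which is exactly the trade-off a combined delay/throughput/jitter cost is meant to capture.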
Hangul Recognition Using Support Vector MachineEditor IJCATR
Recognizing Hangul images is more difficult than recognizing Latin script, because recognition depends on the structural arrangement: Hangul is arranged in two dimensions, while Latin runs only from left to right. This research builds a system that converts Hangul images into Latin text so it can serve as learning material for reading Hangul. In general, an image-recognition system is divided into three steps. The first is preprocessing, which includes binarization, segmentation through the connected-component labeling method, and thinning with the Zhang-Suen algorithm to reduce pattern information. The second is extracting features from every single image, with identification done through the chain-code method. The third is recognition using a Support Vector Machine (SVM) with several kernels, applied to both letter images and Hangul word recognition. The letter set consists of 34 letters, each with 15 different patterns; the 510 patterns in total are divided into three data scenarios. The highest result achieved is 94.7%, using the SVM polynomial and radial-basis-function kernels; the recognition level is influenced by the amount of training data. The Hangul word recognition applies to type-2 Hangul words with 6 different patterns, where the differences arise from changes of font type. The fonts chosen for training are Batang, Dotum, Gaeul, Gulim and Malgun Gothic, while Arial Unicode MS is used for testing. The lowest accuracy, 69%, is obtained with the SVM radial-basis-function kernel; the linear and polynomial kernels both give 72%.
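The chain-code feature step can be illustrated independently of the full pipeline. The 8-direction table and the histogram feature below are assumptions about the exact encoding (the paper does not state its convention); the resulting fixed-length vector is the kind of input an SVM kernel can consume:

```python
# 8-connected Freeman directions: index i is the code for step (dx, dy).
# (Convention assumed here; papers differ on where code 0 points.)
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(boundary):
    """Freeman chain code of an ordered boundary-pixel sequence."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return codes

def direction_histogram(codes):
    """Normalised 8-bin histogram: a fixed-length feature vector for an SVM."""
    hist = [0.0] * 8
    for c in codes:
        hist[c] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Boundary of a unit square traced clockwise in image coordinates (y grows down).
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
codes = chain_code(square)            # [0, 6, 4, 2] under the table above
features = direction_histogram(codes)
```

A vector like `features` from each thinned letter image would then be passed to an SVM trainer (e.g. scikit-learn's `SVC` with a polynomial or RBF kernel) in a reconstruction of the third step.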
Security and imperceptibility improving of image steganography using pixel al...IJECEIAES
Information security is one of the main aspects of processes and methodologies in the technical age of information and communication, and should be a key priority in the secret exchange of information between two parties. Among the strategies used to ensure information security are steganography and cryptography. We improve an effective digital image-steganographic method based on odd/even pixel allocation and a random function to increase security and imperceptibility. This recently developed scheme has been verified for increased security and imperceptibility against the existing problems. Huffman coding is used to transform the secret data prior to the embedding stage; this modified, equivalent secret data hides the payload from attackers and increases the secret-data capacity. The main objectives of our scheme are to boost the peak signal-to-noise ratio (PSNR) of the stego cover, to resist attacks, and to increase the size of the embeddable secret data. The results confirm good PSNR values, and these findings confirm the eligibility of the proposed method.
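A minimal sketch of the general family the abstract describes: LSB embedding with a keyed random pixel order. The odd/even allocation rule and the Huffman pre-coding are omitted, and a shared PRNG seed stands in for the paper's random function; none of this is the authors' exact scheme.

```python
import random

def embed(pixels, bits, key):
    """Hide a bit string in pixel LSBs, visiting pixels in a keyed random order."""
    out = list(pixels)
    order = list(range(len(out)))
    random.Random(key).shuffle(order)          # shared key -> same order on both sides
    for bit, idx in zip(bits, order):
        out[idx] = (out[idx] & ~1) | bit       # overwrite the LSB only
    return out

def extract(pixels, n_bits, key):
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)
    return [pixels[idx] & 1 for idx in order[:n_bits]]

cover = [120, 37, 254, 9, 66, 201, 158, 73]    # toy 8-pixel grayscale "image"
secret = [1, 0, 1, 1]
stego = embed(cover, secret, key=42)
recovered = extract(stego, len(secret), key=42)
```

Each touched pixel changes by at most one intensity level, which is why schemes of this family keep PSNR high; scattering the payload with a keyed order is what makes extraction hard without the key.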
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
A Computational Grid (CG) creates a large, heterogeneous, distributed paradigm to manage and execute computationally intensive applications. In grid scheduling, tasks are assigned to the proper processors of the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is considered NP-hard, meta-heuristic evolutionary techniques are often used to find a solution, and we propose an NSGA-II based algorithm for this purpose. The performance of the proposed Fault-Tolerance Aware NSGA-II (FTNSGA-II) has been estimated with programs written in Matlab. The simulation results evaluate the performance of the proposed algorithm, and comparison with the existing Min-Min and Max-Min algorithms proves the effectiveness of the model.
In the recent machine learning community there is a trend of constructing nonlinear versions of linear algorithms through the 'kernel method', for example kernel principal component analysis, kernel Fisher discriminant analysis, Support Vector Machines (SVMs), and the recent kernel clustering algorithms. Typically, in unsupervised clustering algorithms utilizing the kernel method, a nonlinear mapping is first applied to map the data into a much higher-dimensional feature space, and clustering is then executed there. A drawback of these kernel clustering algorithms is that the clustering prototypes reside in the high-dimensional feature space and therefore lack clear, intuitive descriptions unless an additional approximate projection from the feature space back to the data space is employed, as done in the existing literature. This paper utilizes the 'kernel method' differently: a novel clustering algorithm founded on the conventional fuzzy c-means algorithm (FCM) is proposed, called the kernel fuzzy c-means algorithm (KFCM). This method adopts a novel kernel-induced metric in the data space to replace the original Euclidean norm, so that the cluster prototypes still reside in the data space and the clustering results can be interpreted in the original space. This property is used for clustering incomplete data. Experiments on synthetic data illustrate that KFCM achieves better clustering performance and is more robust than other variants of FCM for clustering incomplete data.
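The kernel-induced metric that keeps prototypes in the data space can be sketched for a Gaussian kernel, where the induced squared distance is 2(1 - K(x, v)); the membership update below follows the standard KFCM form (the fuzzifier m = 2 and sigma = 1 are illustrative choices, not values from the paper):

```python
import math

def gauss_kernel(x, v, sigma=1.0):
    """Gaussian kernel K(x, v) = exp(-||x - v||^2 / sigma^2)."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, v)) / sigma ** 2)

def kernel_dist2(x, v, sigma=1.0):
    """Kernel-induced squared distance ||phi(x) - phi(v)||^2, computed in
    the original data space: 2 * (1 - K(x, v))."""
    return 2.0 * (1.0 - gauss_kernel(x, v, sigma))

def memberships(x, centers, m=2.0, sigma=1.0):
    """Fuzzy memberships of x w.r.t. prototypes that stay in the data space."""
    d = [max(kernel_dist2(x, v, sigma), 1e-12) for v in centers]
    return [1.0 / sum((d[i] / d[j]) ** (1.0 / (m - 1)) for j in range(len(d)))
            for i in range(len(d))]

centers = [(0.0, 0.0), (4.0, 4.0)]     # prototypes live in the 2-D data space
u = memberships((0.5, 0.2), centers)   # nearer prototype gets the larger share
```

Because the prototypes are ordinary data-space points, they can be read off directly after convergence, which is the interpretability property the abstract emphasises.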
A SIMPLE PROCESS TO SPEED UP MACHINE LEARNING METHODS: APPLICATION TO HIDDEN ...cscpconf
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets while still completing the machine learning tasks in reasonable time. In this context, we present in this paper a simple process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop efficient Hidden Markov Model (HMM) training. The proposed process consists of two steps. The first is a preprocessing step that reduces the number of training instances without any information loss: training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while yielding the same initial, transition and emission probability matrices as the classical HMM training algorithm; accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs; it can be employed for a large variety of ML methods.
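The core of the speedup, replacing a pass over all instances with a weighted pass over unique representatives, can be checked on the transition-count statistics that EM accumulates. The toy below uses observable state sequences rather than full Baum-Welch forward-backward passes, but the weighting principle (each formula term scaled by the cluster's frequency) is the same:

```python
from collections import Counter

def transition_counts(sequences, n_states):
    """Classical pass: accumulate transition counts over every sequence."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    return counts

def weighted_transition_counts(weighted_seqs, n_states):
    """Same statistics from unique sequences, each scaled by its weight."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for seq, w in weighted_seqs:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += w
    return counts

# Toy training set with duplicated state sequences (states 0 and 1).
data = [(0, 1, 1), (0, 1, 1), (0, 1, 1), (1, 0, 1)]
unique = list(Counter(data).items())   # [(sequence, frequency weight), ...]

full = transition_counts(data, 2)
fast = weighted_transition_counts(unique, 2)
```

Since `full == fast`, any re-estimation formula built from these counts produces identical probability matrices while iterating over far fewer sequences, which is the source of the reported speedup on heavily duplicated training sets.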
Operating Task Redistribution in Hyperconverged Networks IJECEIAES
In this article, a search method for rational task distribution across the nodes of a hyperconverged network is presented, providing a rational distribution of task sets toward better performance. Using new subsettings for the distribution of nodes in the network based on distributed processing, we can minimize the average packet delay. Distribution quality is ensured by a special objective function that penalizes delays, which is used to create balanced delivery systems. The initial distribution is determined from the minimum penalty; after each cycle (iteration) of redistribution toward an appropriate task distribution, a potential system is formed for functional optimization, and in each cycle a rule for optimizing contour search is applied. The resulting task distribution, accounting for failures as well as successes, is rational and can decrease the average packet delay in hyperconverged networks. The effectiveness of the proposed method is evaluated using the model of the hyperconverged E-learning support system of V.N. Karazin Kharkiv National University. The simulation results based on this model clearly confirm the better performance of our approach in comparison with the classical approach to task distribution.
STUDY OF TASK SCHEDULING STRATEGY BASED ON TRUSTWORTHINESS ijdpsjournal
MapReduce is a distributed computing model for processing massive data in cloud computing; it simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism: it evaluates the trustworthiness of resource nodes in the cloud environment and builds a trustworthiness model. Using the simulation platform CloudSim, the stability of the task scheduling algorithm and the scheduling model are verified.
WMNs: The Design and Analysis of Fair Schedulingiosrjce
Residual balanced attention network for real-time traffic scene semantic segm...IJECEIAES
Intelligent transportation systems (ITS) are among the most active research areas of this century. Autonomous driving in particular involves very advanced road-safety monitoring tasks, including identifying dangers on the road and protecting pedestrians. In the last few years, deep learning (DL) approaches, especially convolutional neural networks (CNNs), have been used extensively to solve ITS problems such as traffic-scene semantic segmentation and traffic-sign classification. Semantic segmentation is an important task in computer vision (CV): traffic-scene semantic segmentation with CNNs requires high precision with few computational resources in order to perceive and segment the scene in real time. However, related work often focuses on only one aspect, either precision or the number of computational parameters. In this regard, we propose RBANet, a robust and lightweight CNN that uses a newly proposed balanced attention module and a newly proposed residual module. We evaluated our RBANet with three loss functions to find the best combination, using only 0.74M parameters. RBANet has been tested on CamVid, the most widely used dataset in semantic segmentation, and performs well in terms of parameter requirements and precision compared to related work.
Traffic-aware adaptive server load balancing for software-defined networks IJECEIAES
Servers in data-center networks handle heterogeneous bulk loads, so load balancing plays an important role in optimizing network bandwidth and minimizing response time. Complete knowledge of the current network status is needed to keep the load stable. Cataloguing the network status in a traditional network needs additional processing, which increases complexity, whereas in software-defined networking (SDN) the control plane monitors the overall working of the network continuously. Hence we propose an efficient load-balancing algorithm that adopts SDN: TAASLB, traffic-aware adaptive server load balancing, which balances flows to the servers in a data-center network. It works on two parameters, residual bandwidth and server capacity, detecting elephant flows and forwarding them to the optimal server where they can be processed quickly. It has been tested with the Mininet simulator and, after experimentation and analysis, gives considerably better results than the existing server load-balancing algorithms in the Floodlight controller.
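A sketch of the selection rule such a scheme implies: score each server from its residual bandwidth and spare capacity, and send detected elephant flows to the best one. The weighting, the elephant-flow byte threshold and the field names are illustrative assumptions, not the paper's calibration:

```python
def is_elephant(flow_bytes, threshold=10_000_000):
    """Flag large flows; the 10 MB cutoff is an illustrative assumption."""
    return flow_bytes >= threshold

def pick_server(servers, alpha=0.5):
    """Rank servers by a weighted mix of residual bandwidth and spare
    capacity (both normalised to [0, 1]); alpha balances the two terms."""
    def score(s):
        return alpha * s["residual_bw"] + (1 - alpha) * (1.0 - s["load"])
    return max(servers, key=score)

servers = [
    {"name": "s1", "residual_bw": 0.2, "load": 0.9},
    {"name": "s2", "residual_bw": 0.8, "load": 0.3},
    {"name": "s3", "residual_bw": 0.6, "load": 0.2},
]
best = pick_server(servers)   # s2: plenty of bandwidth, moderate load
```

In an SDN controller such as Floodlight, a module along these lines would read both inputs from the control plane's continuous monitoring and install a flow rule steering the elephant flow to the chosen server.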
Content-Based Image Retrieval (CBIR) systems are used to search for relevant images in various research areas. CBIR systems use features such as shape, texture and color, and feature extraction is the main step on which the retrieval results depend. Color features are used in the form of color histograms, color moments and the conventional color correlogram; color-space selection represents the color information of the pixels of the query image. Shape is the basic characteristic of the segmented regions of an image, and different methods have been introduced for better retrieval using different shape-representation techniques: earlier, global shape representations were used, but over time the field moved toward local shape representations, which relate more to expressing the result than to the method. Local shape features may be derived from texture properties and color derivatives. Texture features have been used for document images, segmentation-based recognition and satellite images, and are used in different CBIR systems along with color, shape, geometrical-structure and SIFT features.
Cyber attacks have become most prevalent in the past few years, during which attackers have discovered new vulnerabilities to carry out malicious activities on the internet; both clients and servers have been victimized. Clickjacking is one of the attacks adopted to deceive innocuous internet users into initiating some action. It exploits a vulnerability existing in web applications, using a technique that allows cross-domain attacks with the help of user-initiated clicks to perform unintended actions. This paper traces the vulnerabilities that make a website susceptible to a clickjacking attack and proposes a solution for the same.
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...Eswar Publications
Audio and video synchronization plays an important role in speech recognition and multimedia communication. Audio-video sync is a significant problem in live video conferencing, caused by various hardware components that introduce variable delay, and by software environments. The objective of synchronization is to preserve the temporal alignment between the audio and video signals. This paper proposes audio-video synchronization using a spreading-codes delay-measurement technique. The proposed method, evaluated on a home database, achieves 99% synchronization efficiency. The audio-visual signature technique provides a significant reduction in audio-video sync problems and enables effective performance analysis of audio and video synchronization. The paper also implements an audio-video synchronizer and analyses its performance in terms of synchronization efficiency, audio-video time drift and audio-video delay. The simulation is carried out with MATLAB and Simulink, automatically estimating and correcting the timing relationship between the audio and video signals and maintaining quality of service.
Because complicated industrial devices are now available, consumer models with lower resource costs are being developed, and home-automation systems have been built by several researchers. The limitations of home automation include architectural complexity, high equipment costs and interface inflexibility. In this paper we propose a working protocol based on the PIC 16F72 microcontroller that is secure, cost-efficient and flexible, leading to efficient home-automation systems. The system can control various home appliances such as fans, bulbs and tube lights. The paper describes the components used and how all connected components work. The home-automation system uses an Android app entitled "Home App", which provides flexibility and an easy-to-use GUI.
Semantically Enchanced Personalised Adaptive E-Learning for General and Dysle...Eswar Publications
E-learning plays an important role in providing the required, well-formed knowledge to a learner, and the medium has advanced in various directions such as adaptive e-learning systems. Enhancing e-learning semantically can improve the retrieval and adaptability of the learning curriculum. This paper provides a semantically enhanced, module-based e-learning system for a computer-science programme from a learner-centric perspective: learners are categorized by proficiency to provide a personalized learning environment. Learning disorders on e-learning platforms still require much research, so this paper also provides a personalized theoretical assessment model for alphabet learning, with learning objects for children who face dyslexia.
Agriculture plays an important role in the economy of our country: over 58 percent of rural households depend on the agriculture sector for their livelihood, and agriculture is one of the major contributors to Gross Domestic Product (GDP). Seeds are the soul of agriculture. This application reduces the time researchers and farmers need to determine seedling parameters: it tells farmers the percentage of seedlings that will grow, which is essential for estimating the yield of a particular crop. Manual calculation may introduce errors; the developed app minimizes them. Scientists and farmers can use the app to obtain the physiological seed-quality parameters and make decisions about their farming activities. In this article, a desktop app for calculating seed germination percentage and vigour index is developed in the PHP scripting language.
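The two parameters the app computes have simple closed forms. The vigour-index variant below (germination percentage times mean seedling length) is the commonly used formula; the abstract does not state which variant the app implements, so treat it as an assumption. The app itself is in PHP; Python is used here only for the sketch.

```python
def germination_percent(germinated, total):
    """Share of sown seeds that germinated, as a percentage."""
    return 100.0 * germinated / total

def vigour_index(germinated, total, mean_seedling_length_cm):
    """Vigour index = germination % x mean seedling length (assumed variant;
    the article does not spell out which formula it uses)."""
    return germination_percent(germinated, total) * mean_seedling_length_cm

g = germination_percent(85, 100)   # 85 of 100 seeds germinated -> 85.0
v = vigour_index(85, 100, 12.4)    # with 12.4 cm mean seedling length
```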
What happens when adaptive video streaming players compete in time-varying ba...Eswar Publications
Competition among adaptive video streaming players severely diminishes user QoE. When players compete at a bottleneck link, many do not obtain adequate resources, and this imbalance eventually causes ill effects such as screen flickering and video stalling. There have been many attempts in recent years to overcome some of these problems; however, in addition to competition at the bottleneck link, the network bandwidth may also vary, which can make the situation even worse. This work focuses on exactly that situation: it evaluates current heuristic adaptive video players at a bottleneck link under time-varying bandwidth conditions. The experimental setup includes the TAPAS player and emulated network conditions. The results show that PANDA outperforms FESTIVE, ELASTIC and conventional players.
WLI-FCM and Artificial Neural Network Based Cloud Intrusion Detection SystemEswar Publications
Security and performance are major issues that have to be addressed in cloud computing, and intrusion is one basic and critical security problem. Consequently, it is essential to create an Intrusion Detection System (IDS) that detects both inside and outside attacks with high detection precision in the cloud environment. In this paper, a cloud intrusion detection system at the hypervisor layer is developed and assessed to detect malicious activities in the cloud computing environment. It uses a hybrid algorithm, a fusion of the WLI-FCM clustering algorithm and a back-propagation artificial neural network, to improve detection accuracy. The proposed system is implemented and compared with K-means and classic FCM, using DARPA's KDD Cup 1999 dataset for simulation. The detailed performance analysis makes clear that the proposed system detects anomalies with high detection accuracy and a low false-alarm rate.
Spreading Trade Union Activities through Cyberspace: A Case StudyEswar Publications
This report presents the outcome of an investigative study of the modus operandi of the Academic Staff Union of Polytechnics (ASUP), YabaTech chapter. The investigation covered the logistics and cost implications of spreading union activities among members. It was discovered that the cost of managing and disseminating information to members was high, and that logistics problems caused loss of information in transit, cutting some members off from union activities. To curtail the problems identified, we proposed the design of a secure, dynamic website for spreading union activities to members and the public. The proposed system was implemented using HTML5 technology, with interface frameworks such as Bootstrap and jQuery enabling the responsive features of the application interface; the backend was designed using PHP and MySQL. Evaluation of the new system showed that the cost of managing information was reduced considerably, and the logistics problems identified in the old system became a non-issue.
Identifying an Appropriate Model for Information Systems Integration in the O...Eswar Publications
Nowadays organizations use information systems to optimize processes in order to increase coordination and interoperability across the organization. Since the oil and gas industry is one of the largest industries in the world, its information systems (IS), which fall into three categories (field IS, plant IS and enterprise IS), need to be compatible in order to achieve interoperability and, as a result, process optimization. In this paper we introduce different models of information-systems integration, identify the types of information systems used in the upstream and downstream sectors of the petroleum industry, and finally, based on expert opinion, identify a suitable model for information-systems integration in this industry.
Link-and Node-Disjoint Evaluation of the Ad Hoc on Demand Multi-path Distance...Eswar Publications
This work illustrates the AOMDV routing protocol, and its ancestor, the AODV routing protocol, is also described. The tutorial demonstrates how forward and reverse paths are created by the AOMDV routing protocol; loop-free path formulation is described, together with node-disjoint and link-disjoint paths. Finally, the performance of the AOMDV routing protocol is investigated along link- and node-disjoint paths: in terms of energy consumption, a WSN running AOMDV with link-disjoint paths outperforms one using node-disjoint paths.
Bridging Centrality: Identifying Bridging Nodes in Transportation NetworkEswar Publications
Several centrality measures are used to identify the importance of a node in a network, and the majority of them are dominated by the nodes' degree because they look only at network topology. We propose a centrality-based identification model, bridging centrality, based on both information flow and topological aspects. We apply bridging centrality to real-world networks, including a transportation network, and show that the nodes distinguished by bridging centrality are well located at the connecting positions between highly connected regions. Bridging centrality can discriminate bridging nodes, nodes with more information flowing through them that lie between highly connected regions, while other centrality measures cannot.
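The abstract does not give the formula, so the sketch below assumes the usual formulation from the bridging-centrality literature: betweenness centrality multiplied by a bridging coefficient built from node degrees. On a toy graph of two dense clusters joined through one node, that connecting node scores highest, matching the behaviour the abstract describes:

```python
from collections import deque

def bfs_counts(adj, s):
    """Hop distances and shortest-path counts from s (unweighted BFS)."""
    dist, sigma, q = {s: 0}, {s: 1}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w], sigma[w] = dist[u] + 1, 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj, v):
    """Fraction of shortest paths through v, summed over node pairs."""
    total, nodes = 0.0, [n for n in adj if n != v]
    for s in nodes:
        ds, ss = bfs_counts(adj, s)
        for t in nodes:
            if t == s or t not in ds or v not in ds:
                continue
            dt, st = bfs_counts(adj, t)
            if ds[v] + dt[v] == ds[t]:
                total += ss[v] * st[v] / ss[t]
    return total / 2.0          # undirected graph: each pair counted twice

def bridging_coefficient(adj, v):
    """Inverse degree of v over the summed inverse degrees of its neighbours."""
    return (1.0 / len(adj[v])) / sum(1.0 / len(adj[n]) for n in adj[v])

def bridging_centrality(adj, v):
    """Assumed formulation: betweenness(v) * bridging coefficient(v)."""
    return betweenness(adj, v) * bridging_coefficient(adj, v)

# Two triangles joined through the single bridging node "b".
adj = {"a1": ["a2", "a3"], "a2": ["a1", "a3"], "a3": ["a1", "a2", "b"],
       "b": ["a3", "c3"], "c3": ["c1", "c2", "b"],
       "c1": ["c2", "c3"], "c2": ["c1", "c3"]}
scores = {v: bridging_centrality(adj, v) for v in adj}
best = max(scores, key=scores.get)    # "b" sits between the two clusters
```

A pure degree or betweenness ranking can favour hubs inside a dense region; multiplying by the bridging coefficient is what pushes low-degree connectors between regions to the top.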
Nowadays we live in an era of information technology in which almost every person engages with IT, intentionally or not. Technology has played a vital role in our day-to-day lives for the last few decades, and we all depend on it to obtain maximum benefit and comfort. The latest advent of this era is the Internet of Things (IoT), a domain that leads to real-world scenarios in which every object can perform some task while communicating with other objects: a world full of devices, sensors and other objects that communicate and make human life far better and easier than ever. This paper provides an overview of current IoT research in terms of architecture, technologies used and applications, and, after a literature review, highlights the issues related to the technologies used for IoT. The main purpose of this survey is to present the latest technologies and their corresponding trends in the field of IoT in a systematic manner, as an aid to further research.
Automatic Monitoring of Soil Moisture and Controlling of Irrigation SystemEswar Publications
In the past couple of decades there has been rapid growth in agricultural technology. Proper drip irrigation is very reasonable and efficient; various drip-irrigation methods have been proposed, but they have been found to be expensive and cumbersome to use. In a conventional drip-irrigation system the farmer has to keep watch on the irrigation schedule, which differs for different types of crops. Remotely monitored embedded systems for irrigation have become essential for farmers to save energy, time and money, irrigating only when water is actually required. In this approach, soil-test data on chemical constituents, water content, salinity and fertilizer requirements is collected wirelessly and processed for a better drip-irrigation plan. This paper reviews different monitoring systems and proposes an automatic monitoring-system model using a Wireless Sensor Network (WSN) that helps the farmer improve the yield.
Multi- Level Data Security Model for Big Data on Public Cloud: A New ModelEswar Publications
With the advent of cloud computing, big data has emerged as a very crucial technology. Certain types of cloud provide consumers with free services such as storage and computational power. This paper makes use of infrastructure as a service, where the storage service of public cloud providers is leveraged by an individual or organization. The paper emphasizes a model that anyone can use at no cost: users can store confidential data without security concerns, because the data is altered in such a way that it cannot be understood by an intruder, while the user can retrieve the original data in no time. The proposed security model effectively and efficiently provides robust security both while data resides on the cloud infrastructure and while it is being migrated to or from it.
Impact of Technology on E-Banking; Cameroon PerspectivesEswar Publications
The financial services industry is experiencing rapid changes in services delivery and channels usage, and financial companies and users of financial services are looking at new technologies as they emerge and deciding whether or not to embrace them and the new opportunities to save and manage enormous time, cost and stress.
There is no doubt about the favourable and manifold impact of technology on e-banking as pictured in this review paper, almost all banks are with the least and most access e-banking Technological equipments like ATMs and Cards. On the other Hand cheap and readily available technology has opened a favourable competition in ebanking services business with a lot of wide range competitors competing with Commercial Banks in Cameroon in providing digital financial services.
Classification Algorithms with Attribute Selection: an evaluation study using...Eswar Publications
Attribute or feature selection plays an important role in the process of data mining. In general the data set contains more number of attributes. But in the process of effective classification not all attributes are relevant.
Attribute selection is a technique used to extract the ranking of attributes. Therefore, this paper presents a comparative evaluation study of classification algorithms before and after attribute selection using Waikato Environment for Knowledge Analysis (WEKA). The evaluation study concludes that the performance metrics of the classification algorithm, improves after performing attribute selection. This will reduce the work of processing irrelevant attributes.
Mining Frequent Patterns and Associations from the Smart meters using Bayesia...Eswar Publications
In today’s world migration of people from rural areas to urban areas is quite common. Health care services are one of the most challenging aspect that is must require to the people with abnormal health. Advancements in the technologies lead to build the smart homes, which contains various sensor or smart meter devices to automate the process of other electronic device. Additionally these smart meters can be able to capture the daily activities of the patients and also monitor the health conditions of the patients by mining the frequent patterns and
association rules generated from the smart meters. In this work we proposed a model that is able to monitor the activities of the patients in home and can send the daily activities to the corresponding doctor. We can extract the frequent patterns and association rules from the log data and can predict the health conditions of the patients and can give the suggestions according to the prediction. Our work is divided in to three stages. Firstly, we used to record the daily activities of the patient using a specific time period at three regular intervals. Secondly we applied the frequent pattern growth for extracting the association rules from the log file. Finally, we applied k means clustering for the input and applied Bayesian network model to predict the health behavior of the patient and precautions will be given accordingly.
Network as a Service Model in Cloud Authentication by HMAC AlgorithmEswar Publications
Resource pooling on internet-based accessing on use as pay environmental technology and ruled in IT field is the
cloud. Present, in every organization has trusted the web, however, the information must flow but not hold the
data. Therefore, all customers have to use the cloud. While the cloud progressing info by securing-protocols. Third
party observing and certain circumstances directly stale in flow and kept of packets in the virtual private cloud.
Global security statistics in the year 2017, hacking sensitive information in cloud approximately maybe 75.35%,
and the world security analyzer said this calculation maybe reached to 100%. For this cause, this proposed
research work concentrates on Authentication-Message-Digest-Key with authentication in routing the Network as
a Service of packets in OSPF (Open Shortest Path First) implementing Cloud with GNS3 has tested them to
securing from attackers.
Microstrip patch antennas are recently used in wireless detection applications due to their low power consumption, low cost, versatility, field excitation, ease of fabrication etc. The microstrip patch antennas are also called as printed antennas which is suffer with an array elements of antenna and narrow bandwidth. To overcome the above drawbacks, Flame Retardant Material is used as the substrate. Rectangular shape of microstrip patch antenna with FR4 material as the substrate which is more suitable for the explosive detection applications. The proposed printed antenna was designed with the dimension of 60 x 60 mm2. FR-4 material has a dielectric constant value of 4.3 with thickness 1.56 mm, length and width 60 mm and 60 mm respectively. One side of the substrate contains the ground plane of dimensions 60 x60 mm2 made of copper and the other side of the substrate contains the patch which have dimensions 34 x 29 mm2 and thickness 0.03mm which is also made of copper. RMPA without slot, Vertical slot RMPA, Double horizontal slot RMPA and Centre slot RMPA structures were
designed and the performance of the antennas were analysed with various parameters such as gain, directivity, Efield, VSWR and return loss. From the performance analysis, double horizontal slot RMPA antenna provides a better result and it provides maximum gain (8.61dB) and minimum return loss (-33.918dB). Based on the E-field excitation value the SEMTEX explosive material is detected and it was simulated using CST software.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Machine learning in Dynamic Adaptive Streaming
over HTTP (DASH)
Int. J. Advanced Networking and Applications
Volume: 09 Issue: 03 Pages: 3461-3468 (2017) ISSN: 0975-0290
Koffka Khan
Department of Computing and Information Technology The University of the West Indies, Trinidad and Tobago, W.I
Email: koffka.khan@gmail.com
Wayne Goodridge
Department of Computing and Information Technology The University of the West Indies, Trinidad and Tobago, W.I
Email: wayne.goodridge@sta.uwi.edu.com
-------------------------------------------------------------------ABSTRACT---------------------------------------------------------------
Recently machine learning has been introduced into the area of adaptive video streaming. This paper explores a
novel taxonomy that includes six state of the art techniques of machine learning that have been applied to
Dynamic Adaptive Streaming over HTTP (DASH): (1) Q-learning, (2) Reinforcement learning, (3) Regression, (4)
Classification, (5) Decision Tree learning, and (6) Neural networks.
Keywords—Machine learning; DASH; Q-learning; Reinforcement learning; Regression; Classification; Decision
Tree learning; Neural networks
---------------------------------------------------------------------------------------------------------------------------------------------------
Date of Submission: Nov 17, 2017 Date of Acceptance: Dec 05, 2017
---------------------------------------------------------------------------------------------------------------------------------------------------
I. INTRODUCTION
The growing use of video services demands efficient and
reliable video quality assessment and transmission
techniques. In an HTTP Adaptive Streaming (HAS) design,
video content is stored on a server as segments of fixed
duration at different quality levels. Each client can request
each segment at the most appropriate quality level on the
basis of the locally perceived bandwidth or its internal
buffer level. In this way, video playback dynamically
adapts to the available resources, resulting in smoother
video streaming.
Typical goals that measure the goodness of a streaming
session take into account average quality levels displayed
to the user, fluctuations in quality levels, and rebuffering
events [21]. Unfortunately, there is no single agreed upon
measurement of streaming quality. The ultimate standard of
goodness might involve conducting comprehensive user
surveys under different network conditions and devices
[37]. Hence, the ability to optimize with respect to a wide
variety of objective functions that take these events into
account is of utmost importance.
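As a concrete illustration, the objective terms listed above (average displayed quality, quality fluctuations, rebuffering events) can be combined into a single session score. The following Python sketch uses hypothetical weights and toy values; it is not taken from any of the surveyed papers:

```python
def qoe_score(qualities, rebuffer_seconds, w_switch=1.0, w_rebuf=4.0):
    """Toy QoE objective: reward high average quality, penalize
    quality fluctuations and rebuffering. Weights are illustrative."""
    avg_quality = sum(qualities) / len(qualities)
    # Sum of absolute quality-level changes between consecutive segments.
    fluctuation = sum(abs(a - b) for a, b in zip(qualities, qualities[1:]))
    return avg_quality - w_switch * fluctuation - w_rebuf * rebuffer_seconds

# A steady session scores higher than a fluctuating one of equal mean quality.
steady = qoe_score([3, 3, 3, 3], rebuffer_seconds=0.0)   # 3.0
jumpy = qoe_score([1, 5, 1, 5], rebuffer_seconds=0.0)    # -9.0
```

Different choices of the weights shift the optimization target, which is why the ability to optimize a wide variety of such objective functions matters.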
Q-learning can also be viewed as a method of
asynchronous dynamic programming (DP). It provides
agents with the capability of learning to act optimally in
Markovian domains by experiencing the consequences of
actions, without requiring them to build maps of the
domains [35]. In Q-learning, if each subagent uses the
conventional Q-learning algorithm [39], global optimality
is not achieved. Instead, each subagent learns the Qj values
that would result if that subagent were to make all future
decisions for the agent. This “illusion of control” means
that the subagents converge to “selfish” estimates that
overestimate the true values of their own rewards with
respect to a globally optimal policy [24].
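The tabular Q-learning update underlying these approaches can be sketched as follows. The buffer states, bitrate actions, and learning parameters below are illustrative assumptions, not values from the cited works:

```python
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q

# Hypothetical DASH states (buffer condition) and actions (bitrate index).
actions = [0, 1, 2]
Q = {}
q_update(Q, state="buffer_low", action=0, reward=1.0,
         next_state="buffer_ok", actions=actions)
```

Repeating this update over many segment requests lets a client converge toward a bitrate-selection policy without a model of the network.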
In adaptive streaming environments, reinforcement
learning (RL) is applied at the client side. The client is
able to learn the most appropriate parameter configuration
under different network conditions. RL is a machine
learning technique in which an agent learns about its
environment by performing a number of actions. Every
time an action is taken, the agent perceives feedback
through a numerical reward from the environment [30].
The agent’s goal is to learn which action should be taken in
a given environmental state, in order to maximize the
cumulative numerical reward.
Regression is a simple, yet powerful technique for
identifying trends in datasets and predicting properties of
new data points. The output can be a continuous value, a
discrete value (e.g. a boolean), or a vector of such values.
The input is a vector, and can also be either discrete or
continuous. The individual properties of one data row are
called features. For discrete problems, one or more
threshold values are chosen between the various “levels” of output. The
phase-space surface defined by these levels is called the
decision boundary [14]. The prediction is done via a
hypothesis function. The “learning” is driven by a set of
examples.
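A minimal sketch of this idea fits a linear hypothesis function by least squares; the throughput feature and quality target values below are invented for illustration:

```python
import numpy as np

# Toy training set: feature = recent throughput (Mbit/s),
# target = achievable quality level. Data is illustrative only.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

# Hypothesis h(x) = w*x + b, fitted by least squares.
A = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
w, b = np.linalg.lstsq(A, y, rcond=None)[0]

def predict(x):
    """The learned hypothesis function."""
    return w * x + b
```

The fitted hypothesis then predicts properties of new data points, e.g. `predict(5.0)` extrapolates the trend beyond the training examples.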
Classification is the procedure of placing a new
observation into a set of categories (sub-categories), on the
basis of a training set of data, which contains observations
(or instances) whose category membership is known [18].
The classification model can be developed on the complete
case data for a given feature. This model treats the feature
as the outcome and uses the other features as predictors.
Feature subset selection is the process of identifying and
removing as many irrelevant and redundant features as
possible [12]. This reduces the dimensionality of the data
and enables data mining algorithms to operate faster and
more effectively.
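As an illustrative sketch, a simple nearest-centroid classifier can place a new observation of network conditions into a category learned from a training set; the features and class labels below are hypothetical:

```python
import numpy as np

def fit_centroids(X, labels):
    """Nearest-centroid training: one mean vector per class."""
    return {c: X[labels == c].mean(axis=0) for c in set(labels)}

def classify(centroids, x):
    """Assign x to the class whose centroid is closest."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Hypothetical observations: (throughput Mbit/s, buffer seconds).
X = np.array([[0.5, 2.0], [0.7, 3.0], [5.0, 20.0], [6.0, 25.0]])
labels = np.array(["congested", "congested", "good", "good"])
centroids = fit_centroids(X, labels)
```

Feature subset selection would, in this setting, amount to dropping columns of `X` that do not improve class separation, shrinking the model and speeding up classification.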
The decision tree is defined as a supervised learning
model that hierarchically maps a data domain onto a
response set. It divides a data domain (node) recursively
into two sub domains such that the subdomains have a
higher information gain than the node that was split. The
goal of supervised learning is the classification of the data,
and therefore, the information gain means the ease of
classification in the subdomains created by a split. Finding
the best split that gives the maximum information gain (i.e.,
the ease of classification) is the goal of the optimization
algorithm in the decision tree-based supervised learning
[29].
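The information-gain criterion described above can be computed directly. A small Python sketch with toy class labels (the labels are hypothetical):

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    return -sum((k / n) * log2(k / n) for k in counts.values())

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting `parent` into two children."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# A split that perfectly separates the classes gains the full 1 bit.
gain = information_gain(["a", "a", "b", "b"], ["a", "a"], ["b", "b"])
```

The tree-building optimization repeatedly evaluates candidate splits with this function and keeps the split with the maximum gain.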
Artificial neural networks can be employed to solve a
wide spectrum of problems in optimization, parallel
computing, matrix algebra and signal processing [6]. There
exist many training methods for both feedforward networks
(including multilayer and radial basis networks) and
recurrent networks. In addition to conjugate gradient and
Levenberg-Marquardt variations of the backpropagation
algorithm (BA), neural network training methods also cover
Bayesian regularization and early stopping, which ensure
the generalization ability of trained networks [8]. Practical
uses of neural networks include function
approximation, pattern recognition, clustering, prediction,
and, lately, adaptive video streaming.
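A minimal sketch of the forward pass of such a feedforward network follows; the weights are random and purely illustrative, and training by a backpropagation variant is omitted:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer feedforward pass with ReLU activation;
    the kind of model trained by the backpropagation variants above."""
    h = np.maximum(0.0, W1 @ x + b1)   # hidden layer, ReLU
    return W2 @ h + b2                 # linear output layer

# Illustrative fixed weights: 2 inputs -> 3 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
y = forward(np.array([1.0, -1.0]), W1, b1, W2, b2)
```

In an adaptive-streaming setting the inputs could be throughput and buffer observations and the output a bitrate decision, as in the systems surveyed below.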
This work consists of four sections. Section II presents
the Taxonomy of Machine Learning-based techniques
currently applied to DASH. Section III gives examples of
these state of the art techniques that were defined in the
taxonomy. Finally, the conclusion is given in Section IV.
II. TAXONOMY OF MACHINE LEARNING-BASED
AVS
The Machine Learning-based AVS techniques are
categorized into (1) Q-Learning, (2) Reinforcement
Learning, (3) Regression, (4) Classification, (5) Decision
Tree Learning, and (6) Neural Networks. (cf. Figure 1)
Figure 1: Machine Learning in AVS: a taxonomy.
III. MACHINE-LEARNING IN DASH
A. Q-Learning
An adaptive Q-Learning-based HAS client is proposed in
[4]. The proposed HAS client dynamically learns the
optimal behavior corresponding to the current network
environment. This is in contrast to the way current
heuristics deal with the AVS problem. It considers (1)
multiple aspects of video quality, (2) a tunable reward
function which gives the opportunity to focus on different
aspects of the Quality of Experience (QoE), and (3) the
quality as perceived by the end-user. The proposed HAS
client is evaluated by a network-based simulator. It
investigates multiple reward configurations and Q-learning
specific settings. The proposed client outperforms standard
HAS players in the evaluated networking environments.
Figure 2: Schematic overview of the Q-learning HAS client
design. [4]
Authors in [31] use a novel learning model to determine
(a) the optimal time to re-route the traffic flows, and (b)
when to change the bitrate of the video. To accomplish this task
they propose an adaptive video streaming system with a
learning-based approach. It runs over a Software Defined
Network (SDN) [11], [13]. In the proposed video
streaming system the learning model aims to minimize (1)
the packet loss rate, (2) quality changes, and (3) controller
cost. It concurrently adapts (i) the flow routes and (ii)
video quality. The performance of the learning-based
approach is tested by comparing it to traditional Internet
routing and the greedy approach. The proposed system
significantly outperforms the traditional Internet routing
approach and the greedy approach in experiments under
different network scenarios. The tests covered both
Quality of Experience (QoE) and network cost.
Figure 3: Network topology used in the simulations. There
are six clients, one server and one controller in the
network. The blue dotted lines indicate the OpenFlow
protocol and the red dashed line represents the
communication between the server and the controller. [31]
A (frequency adjusted) Q-learning HAS client is proposed
in [5]. The proposed HAS client dynamically learns the
optimal behaviour in terms of the Quality of Experience
(QoE) corresponding to the current network environment.
Thus, the client has been optimised both in terms of (1)
global performance and (2) convergence speed. In a wide
range of network configurations, thorough evaluations
show that the proposed client can outperform deterministic
algorithms by 11–18% in terms of mean opinion score
[28], [36] (MOS).
Figure 4: Overview of the simulated topology. [5]
Authors in [17] propose a Q-Learning-based algorithm for
HTTP Adaptive Streaming (HAS). It maximizes the
perceived user quality. It takes into account the
relationships among (1) the estimated bandwidth, (2) the
quality levels, and (3) a penalty for freezes. Results show
that it produces control as optimal as that observed with
other approaches, but in addition it keeps the adaptiveness.
B. Reinforcement Learning
Authors in [22] propose a Reinforcement Learning-based
quality selection algorithm. It is able to achieve fairness in
a multi-client setting. A coordination proxy in charge of
facilitating the coordination among clients is a key
element of their approach. There are three advantages to
this approach: (1) the algorithm is able to learn and adapt
its policy depending on network conditions. This is unlike
recently developed HAS heuristics. (2) Fairness is
achieved without explicit communication among agents.
Thus there is no significant overhead introduced into the
network. (3) There are no modifications to the standard
HAS architecture. This novel approach is evaluated
through simulations using (a) mutable network conditions,
and (b) in several multi-client scenarios. The proposed
approach improves fairness by up to 60% compared to
current HAS heuristics.
Figure 5: Logical work-flow of the proposed solution. [22]
An adaptive method that combines video encoding and the
video transmission control system is presented in [2]. The
study was implemented over heterogeneous networks. The
method includes the following steps: (1) collect and
standardize the real-time information describing the
network and the video, (2) assess the video quality, (3)
calculate the video coding rate based on the standardized
information, (4) process the encoded compression of the
video according to the calculated coding rate, and (5)
transfer the compressed video. Experiments showed a
significant improvement in the quality of real-time video
transmission. This is achieved without changing the
existing network. The advantages of the solution are that
(1) it is easy to deploy, (2) it can be implemented quickly,
and (3) it may help to extensively ensure video quality for
normal users.
Figure 6: Proposed system framework. [2]
A reinforcement learning approach that uses only the
buffer state information to choose the segment quality
during playback is proposed in [44]. This approach
optimizes for a measure of user-perceived streaming
quality. Results are simulated and show that the proposed
approach achieves better QoE than rate-based, buffer-
based, and reinforcement learning-based approaches.
The concept of reinforcement learning is introduced at
client side in [32]. It allows the player to adaptively
change the parameter configuration for existing rate
adaptation heuristics. Characteristics that are taken into
account in the decision process are bandwidth-based. This
allows the player to improve its bandwidth-awareness. The
goals of the model are (1) to actively reduce the average
buffer filling, and (2) to evaluate the results for two well-
known heuristics: (a) the Microsoft IIS Smooth Streaming
heuristic [43], and (b) the QoE-driven Rate Adaptation
Heuristic for Adaptive video Streaming [22]. The average
buffer filling can be reduced by 8.3% compared to state of
the art by using the proposed learning-based approach.
This reduction is achieved while achieving a comparable
level of QoE.
Authors in [16] develop Pensieve. Pensieve is a system
that generates ABR algorithms. It only uses
Reinforcement Learning (RL) techniques. Pensieve uses
RL to train a neural network model. The model selects
bitrates for future video chunks. This selection is based on
observations collected by client video players. Pensieve
does not rely upon pre-programmed models or
assumptions about the environment, which makes it
different from other approaches. Instead, it learns to make
ABR decisions solely through observations. These
observations are as a result of the performance of past
decisions made by the controller. Consequently, Pensieve
can automatically generate ABR algorithms that adapt to (1)
a wide range of environmental conditions, and (2) QoE
metrics. Pensieve is compared to state-of-the-art ABR
algorithms. The comparison uses (a) trace-driven experiments, (b)
real world experiments ((a) and (b) span a wide variety of
network conditions), (c) QoE metrics, and (d) video
properties. Pensieve outperforms the best state of the art
scheme in all considered scenarios. It obtains
improvements in average QoE of 13.1%-25.0%. Pensieve's
policies generalize well. It outperforms existing schemes.
This is so even on networks on which it was not trained.
Figure 7: The Actor-Critic algorithm that Pensieve uses to
generate ABR policies. [16]
Authors in [41] discuss the use of reinforcement-learning
(RL) to learn the optimal request strategy at the HAS
client. It progressively maximizes a pre-defined Quality
of Experience (QoE)-related reward function. The most
influential factors for the request strategy are investigated
under a RL framework. The strategy uses a forward
variable selection algorithm. The performance of the RL-
based HAS client is evaluated. It is done on a Video-on-
Demand (VOD) simulation system. Results are gathered
from experiments. The RL-based HAS client is able to
optimize the quantitative QoE given the QoE-related
reward function. The RL-based HAS client is more robust
and flexible compared to a conventional HAS system,
even under versatile network conditions.
C. Regression
It is shown in [15] that static users and adaptive streaming
users have fewer starvation events than mobile users. The
researchers also observed that with mobile users it is more
difficult to predict video starvation. It was shown that
(1) channel conditions and (2) the number of active users are
two important features which contribute to better
prediction performance. The two-feature prediction
strategy provides sufficient accuracy for static users, but
not for mobile users. The authors also demonstrate that
two pieces of information, the number of users
served in a cell and the number of users experiencing
video starvation, provide similar prediction accuracy.
In the proposed method [19], authors use the QoS
parameters for the video streaming. These parameters are
determined based on a machine learning algorithm. It uses
regression analysis. The model's parameters are derived
from (a) the user requirements, (b)
computational/network resources, and (c) the service
provisioning environment. The design and
implementation of a prototype shows the method is
experimentally valid and feasible.
Figure 8: The video quality determining mechanism with machine learning. [19]
Authors in [27] investigate quality prediction models for
videos using (1) well-known image metrics, (2)
information about the bitrate levels, and (3) a relatively
simple machine learning method. Even though quality
assessment of ABR videos is a hard problem, the initial
results are promising: a Spearman rank order correlation
of 0.88 is obtained using content-independent
cross-validation.
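The Spearman rank order correlation used as the evaluation criterion can be computed directly from rank differences; the simple formula in this sketch assumes no tied values:

```python
def spearman(x, y):
    """Spearman rank correlation via the rank-difference formula
    (assumes no ties among the values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```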
A no-reference objective quality estimation framework is
proposed in [26]. The framework is suitable for any block-
based video codec. In the proposed solution, features are
extracted from coding units and then summarized to form
frame-level features. Stepwise regression is used to select
the important feature variables, which reduces the
dimensionality of the feature vectors. Subsequently, a
polynomial regression-based approach is used to model
the nonlinear relationship between (a) the feature vectors
and (b) the true objective quality values of coding units
and video frames. The proposed framework is
implemented using MPEG-2 and HEVC. The objective
quality estimation results are compared against an existing
state-of-the-art solution and quantified using the Pearson
correlation factor and the root mean square error measure.
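A minimal sketch of the polynomial-regression step, fitting a second-order polynomial that maps a single frame-level feature to quality scores; the data is synthetic and not from [26]:

```python
import numpy as np

# Synthetic feature/quality pairs with a nonlinear (quadratic) relation
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
q = 1.0 - (1.0 - x) ** 2

coeffs = np.polyfit(x, q, deg=2)   # fit a 2nd-order polynomial
q_hat = np.polyval(coeffs, x)

assert np.allclose(q_hat, q)       # the exact quadratic is recovered
```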
D. Classification
Authors in [23] propose a network-based framework in
which a network controller prioritizes the delivery of
particular video segments in order to prevent freezes at
the clients. The framework is based on OpenFlow, a
widely adopted protocol for implementing the software-
defined networking (SDN) principle. The main element of
the controller is a Machine Learning (ML) engine based
on the random undersampling boosting algorithm and
fuzzy logic. It can detect when a client is close to a freeze
and drive the network prioritization to avoid it. This
decision is based on measurements collected from the
network nodes only; the engine has no knowledge of (1)
the streamed videos or (2) the clients' characteristics.
Int. J. Advanced Networking and Applications, Volume: 09, Issue: 03, Pages: 3461-3468 (2017), ISSN: 0975-0290
The design of the proposed ML-based framework is
discussed and illustrated, and its performance is compared
against other benchmark HAS solutions in various video
streaming scenarios. Extensive experimentation shows
that the proposed approach can reduce video freezes and
freeze time by about 65% and 45%, respectively.
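The random undersampling step at the heart of the boosting algorithm used in [23] can be sketched as follows: freeze events are rare, so the majority ("no freeze") class is subsampled before each boosting round. The data here is synthetic:

```python
import random

random.seed(0)
# Synthetic imbalanced samples: label 1 = "client close to a freeze"
samples = [("s%d" % i, 0) for i in range(95)] + \
          [("f%d" % i, 1) for i in range(5)]

minority = [s for s in samples if s[1] == 1]
majority = [s for s in samples if s[1] == 0]

# Randomly undersample the majority class to balance the classes;
# a boosted classifier would then be trained on this balanced set.
balanced = minority + random.sample(majority, len(minority))

assert len(balanced) == 10
```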
A Machine Learning-based Adaptive Streaming over
HTTP (MLASH) framework is presented in [3]. It is an
elastic framework that exploits a wide range of useful
network-related features to train a rate classification
model. MLASH allows its machine learning-based
framework to be incorporated into any existing adaptation
algorithm and to utilize big data characteristics, which
improves the model's prediction accuracy. Trace-based
simulations show that machine learning-based adaptation
achieves better performance than traditional adaptation
algorithms in terms of a player's target quality of
experience (QoE) metrics.
Figure 9: System architecture of MLASH. [3]
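As a sketch of rate classification from network-related features, a toy nearest-neighbour classifier over (throughput, buffer) pairs is shown below; the data and classes are invented for illustration and are not MLASH's actual feature set:

```python
# Toy training set: (throughput_mbps, buffer_s) -> next-segment rate class
train = [((5.0, 20.0), "high"), ((4.5, 15.0), "high"),
         ((2.5, 10.0), "mid"),
         ((1.0, 6.0), "low"), ((0.8, 4.0), "low")]

def predict_rate(features):
    """1-nearest-neighbour classification on squared Euclidean distance."""
    dist = lambda a: sum((p - q) ** 2 for p, q in zip(a, features))
    return min(train, key=lambda t: dist(t[0]))[1]

assert predict_rate((4.8, 18.0)) == "high"
assert predict_rate((0.9, 5.0)) == "low"
```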
Authors in [20] present a methodology for the
classification of end users' QoE when watching YouTube
videos, based only on statistical properties of encrypted
network traffic. The developed system, called YouQ,
includes tools for (1) monitoring, (2) analysis of
application-level quality indicators, and (3) corresponding
traffic traces. The collected data is then used to develop
machine learning models for QoE classification based on
traffic features computed per video session. A dataset of
1060 different YouTube videos streamed across 39
different bandwidth scenarios is used to test the YouQ
system and methodology, and various classification
models are tested. Classification accuracy of up to 84% is
obtained with three QoE classes ((a) "low", (b) "medium"
and (c) "high"), and up to 91% with binary classes
((i) "low" and (ii) "high"). Why and when prediction
errors occur is discussed in an attempt to improve the
models in the future.
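Traffic features of the kind YouQ computes per video session can be derived from packet timestamps and sizes alone, without decrypting the payload. A toy sketch over an invented trace:

```python
# Toy encrypted-traffic trace for one session: (timestamp_s, packet_bytes)
trace = [(0.0, 1500), (0.5, 1500), (1.0, 400), (2.0, 1500), (3.0, 200)]

duration_s = trace[-1][0] - trace[0][0]
total_bytes = sum(size for _, size in trace)
mean_throughput_kbps = total_bytes * 8 / duration_s / 1000

assert total_bytes == 5100
```

Such per-session features (total bytes, throughput statistics, burstiness) form the input vectors for the QoE classifiers.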
E. Decision Tree Learning
Authors in [1] tackle the problem of QoE monitoring,
assessment and prediction in cellular networks, relying on
end-user device (i.e., smartphone) QoS passive traffic
measurements and crowdsourced QoE feedback. They
conceive different QoE assessment models based on
supervised machine learning techniques, capable of
predicting the QoE experienced by end users of popular
smartphone apps (e.g., YouTube and Facebook), using the
passive in-device measurements as input. Using a rich
QoE dataset derived from field trials in operational
cellular networks, they benchmark the performance of
multiple machine learning-based predictors and construct
a decision-tree-based model capable of predicting the
per-user overall experience and service acceptability with
success rates of 91% and 98%, respectively. To the best of
the authors' knowledge, this is the first paper using
end-user, in-device passive measurements and machine
learning models to predict the QoE of smartphone users in
operational cellular networks.
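The kind of decision-tree model learned in [1] can be illustrated with a hand-built tree over hypothetical in-device features; the feature names and thresholds below are invented for illustration, not those of the paper:

```python
def acceptability(stall_ratio, avg_resolution, startup_delay_s):
    """Illustrative hand-built decision tree for service acceptability."""
    if stall_ratio > 0.1:          # frequent stalls dominate the verdict
        return "unacceptable"
    if avg_resolution >= 480:      # smooth playback at decent resolution
        return "acceptable"
    # low resolution: acceptable only if startup was quick
    return "acceptable" if startup_delay_s <= 5 else "unacceptable"

assert acceptability(0.0, 720, 2) == "acceptable"
assert acceptability(0.3, 1080, 1) == "unacceptable"
```

A learned tree has the same shape; supervised training simply chooses the split features and thresholds from labeled data.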
Authors in [7] use the Institut National de la Recherche
Scientifique (INRS) audiovisual quality dataset, which is
specifically designed to include the ranges of parameters
and degradations typically seen in real-time
communications, to build Decision Trees-based Ensemble
(DTE) methods. It is shown that DTEs outperform both
Deep Learning-based and Genetic Programming-based
models in terms of (a) Root-Mean-Square Error (RMSE)
and (b) Pearson correlation values. Further experiments
show that Random Forests-based prediction models
achieve high accuracy on both the INRS audiovisual
quality dataset and other publicly available comparable
datasets.
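The two evaluation criteria named above, RMSE and Pearson correlation, are straightforward to compute; a self-contained sketch:

```python
from math import sqrt

def rmse(y, y_hat):
    """Root-mean-square error between targets and predictions."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

assert rmse([1, 2, 3], [1, 2, 3]) == 0.0
```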
F. Neural Networks
A novel automated and computationally efficient video
assessment method is introduced in [34]. The method
enables accurate real-time (online) analysis of delivered
quality in an adaptable and scalable manner. Offline deep
unsupervised learning processes are employed at the
server side, while only no-reference measurements are
made at the client side, keeping the method inexpensive
for the client. It provides both real-time assessment and
performance comparable to its full-reference counterpart
at the client side. Tested on the LIMP Video Quality
Database [33], the method obtains a correlation between
78% and 91% with the FR benchmark [38]. Thanks to its
unsupervised learning essence, the method is flexible,
dynamically adaptable to new content and scalable with
the number of videos.
Figure 10: Real-time UDL-based video quality assessment method. [34]
Authors in [10] present D-DASH, a framework that
combines Deep Learning and Reinforcement Learning
techniques to optimize the Quality of Experience (QoE)
of DASH. The authors propose and assess different
learning architectures that combine feed-forward and
recurrent deep neural networks and incorporate advanced
strategies. The D-DASH designs are thoroughly evaluated
against state-of-the-art heuristic and learning-based
algorithms, using performance indicators such as image
quality across video segments and freezing/rebuffering
events.
Results obtained on both real and simulated channel
traces show the superiority of D-DASH in nearly all the
considered quality metrics. The D-DASH framework also
exhibits faster convergence to the rate-selection strategy
than the other learning algorithms considered in the study,
and yields a considerably higher QoE.
Figure 11: Network architectures. [10]
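At the core of D-DASH's Deep Q-learning sits the standard Q-value update; in the framework the table below is replaced by a deep neural network, but the update rule is the same. A tabular sketch with invented states and rewards:

```python
# Tabular stand-in for D-DASH's deep Q-network: state = buffer level,
# action = bitrate index, Q maps (state, action) pairs to values.
actions = [0, 1, 2]                 # low / mid / high bitrate
Q = {}
alpha, gamma = 0.1, 0.9             # learning rate, discount factor

def q_update(state, action, reward, next_state):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# One experience: picking the low bitrate at a low buffer avoided a
# stall (reward 1.0) and refilled the buffer.
q_update("buf_low", 0, 1.0, "buf_mid")
assert abs(Q[("buf_low", 0)] - 0.1) < 1e-12
```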
Authors in [25] present bitstream-based features for
perceptual quality estimation of HEVC-coded videos.
Various factors are taken into consideration, including:
(1) the impact of different sizes of block partitions, (2) the
use of reference frames, (3) the relative amount of various
prediction modes, (4) statistics of motion vectors, and (5)
quantization parameters. The result is a set of 52 features
relevant for perceptual quality prediction. The test stimuli
constitute 560 bitstreams, carefully extracted for this
analysis from the 59,520 bitstreams of the large-scale
database generated by the Joint Effort Group (JEG) [9].
The significance of the considered features is highlighted
through reasonably accurate and monotonic prediction of
a number of objective quality metrics.
Authors in [42] propose a novel no-reference (NR) video
quality metric that evaluates the impact of frame freezing
due to either packet loss or late arrival. The metric uses a
trained neural network (NN) acting on features chosen to
capture the impact of frame freezing on the user-perceived
quality. The considered features include (a) the number of
freezes, (b) freeze duration statistics, (c) inter-freeze
distance statistics, (d) the frame difference before and
after a freeze, (e) the normal frame difference, and (f) the
ratio of the two. The NN finds the mapping between
features and subjective test scores, optimizing the network
structure and the feature selection through a
cross-validation procedure that uses training samples
extracted from both the VQEG [40] and LIVE video
databases. The resulting feature set and network structure
give precise quality prediction for both the training data,
containing 54 test videos, and an unconnected testing
dataset of 14 videos, with Pearson correlation coefficients
greater than 0.9 and 0.8 for the training set and the testing
set, respectively.
Figure 12: Flowchart of the proposed metric. [42]
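Features (a) and (b) above can be extracted directly from a per-frame freeze indicator; a minimal sketch on a synthetic timeline:

```python
# Synthetic per-frame indicator: 1 means the frame is a freeze (repeat)
frozen = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]

freeze_lengths, run = [], 0
for flag in frozen:
    if flag:
        run += 1                       # extend the current freeze
    elif run:
        freeze_lengths.append(run)     # a freeze just ended
        run = 0
if run:
    freeze_lengths.append(run)         # freeze running at end of clip

num_freezes = len(freeze_lengths)                 # feature (a)
mean_freeze = sum(freeze_lengths) / num_freezes   # part of feature (b)

assert freeze_lengths == [3, 1]
```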
IV. CONCLUSION
Recently, machine learning has been introduced into the
area of adaptive video streaming. This paper explores a
novel taxonomy covering six state-of-the-art machine
learning techniques that have been applied to Dynamic
Adaptive Streaming over HTTP (DASH): (1) Q-learning,
(2) Reinforcement learning, (3) Regression, (4)
Classification, (5) Decision Tree learning, and (6) Neural
networks.
REFERENCES
[1] Casas, Pedro, Alessandro D'Alconzo, Florian
Wamser, Michael Seufert, Bruno Gardlo, Anika
Schwind, Phuoc Tran-Gia, and Raimund Schatz.
"Predicting QoE in cellular networks using machine
learning and in-smartphone measurements." In
Quality of Multimedia Experience (QoMEX), 2017
Ninth International Conference on, pp. 1-6. IEEE,
2017.
[2] Cheng, Bo, Jialin Yang, Shangguang Wang, and
Junliang Chen. "Adaptive video transmission control
system based on reinforcement learning approach
over heterogeneous networks." IEEE Transactions on
Automation Science and Engineering 12, no. 3
(2015): 1104-1113.
[3] Chien, Yu-Lin, Kate Ching-Ju Lin, and Ming-Syan
Chen. "Machine learning based rate adaptation with
elastic feature selection for HTTP-based streaming."
In Multimedia and Expo (ICME), 2015 IEEE
International Conference on, pp. 1-6. IEEE, 2015.
[4] Claeys, Maxim, Steven Latré, Jeroen Famaey,
Tingyao Wu, Werner Van Leekwijck, and Filip De
Turck. "Design of a Q-learning-based client quality
selection algorithm for HTTP adaptive video
streaming." In Adaptive and Learning Agents
Workshop, part of AAMAS2013 (ALA-2013), pp. 30-
37. 2013.
[5] Claeys, Maxim, Steven Latré, Jeroen Famaey,
Tingyao Wu, Werner Van Leekwijck, and Filip De
Turck. "Design and optimisation of a (FA) Q-
learning-based HTTP adaptive streaming client."
Connection Science 26, no. 1 (2014): 25-43.
[6] Cochocki, A., and Rolf Unbehauen. Neural networks
for optimization and signal processing. John Wiley &
Sons, Inc., 1993.
[7] Demirbilek, Edip, and Jean-Charles Grégoire.
"Machine Learning--Based Parametric Audiovisual
Quality Prediction Models for Real-Time
Communications." ACM Transactions on Multimedia
Computing, Communications, and Applications
(TOMM) 13, no. 2 (2017): 16.
[8] Demuth, Howard B., Mark H. Beale, Orlando De Jess,
and Martin T. Hagan. Neural network design. Martin
Hagan, 2014.
[9] Fliegel, Karel. "QUALINET Multimedia Databases
v4.6." (2013).
[10]Gadaleta, Matteo, Federico Chiariotti, Michele Rossi,
and Andrea Zanella. "D-DASH: a Deep Q-learning
Framework for DASH Video Streaming." IEEE
Transactions on Cognitive Communications and
Networking (2017).
[11]Kirkpatrick, Keith. "Software-defined networking."
Communications of the ACM 56, no. 9 (2013): 16-19.
[12]Kotsiantis, Sotiris B., Ioannis D. Zaharakis, and
Panayiotis E. Pintelas. "Machine learning: a review of
classification and combining techniques." Artificial
Intelligence Review 26, no. 3 (2006): 159-190.
[13]Kreutz, Diego, Fernando MV Ramos, Paulo Esteves
Verissimo, Christian Esteve Rothenberg, Siamak
Azodolmolky, and Steve Uhlig. "Software-defined
networking: A comprehensive survey." Proceedings
of the IEEE 103, no. 1 (2015): 14-76.
[14]Kutner, Michael H., Chris Nachtsheim, and John
Neter. Applied linear regression models. McGraw-
Hill/Irwin, 2004.
[15]Lin, Yu-Ting, Eduardo Mucelli Rezende Oliveira,
Sana Ben Jemaa, and Salah Eddine Elayoubi.
"Machine learning for predicting QoE of video
streaming in mobile networks." In Communications
(ICC), 2017 IEEE International Conference on, pp. 1-
6. IEEE, 2017.
[16]Mao, Hongzi. "Neural adaptive video streaming with
pensieve." PhD diss., Massachusetts Institute of
Technology, 2017.
[17]Martín, Virginia, Julián Cabrera, and Narciso García.
"Evaluation of q-learning approach for HTTP
adaptive streaming." In Consumer Electronics
(ICCE), 2016 IEEE International Conference on, pp.
293-294. IEEE, 2016.
[18]Michie, Donald, David J. Spiegelhalter, and Charles
C. Taylor. "Machine learning, neural and statistical
classification." (1994).
[19]Oide, Makoto, Akiko Takahashi, Toru Abe, and
Takuo Suganuma. "Design and implementation of
user-oriented video streaming service based on
machine learning." In Cognitive Informatics &
Cognitive Computing (ICCI*CC), 2016 IEEE 15th
International Conference on, pp. 111-116. IEEE,
2016.
[20]Orsolic, Irena, Dario Pevec, Mirko Suznjevic, and Lea
Skorin-Kapov. "A machine learning approach to
classifying YouTube QoE based on encrypted
network traffic." Multimedia Tools and Applications
(2017): 1-35.
[21]Oyman, Ozgur, and Sarabjot Singh. "Quality of
experience for HTTP adaptive streaming services."
IEEE Communications Magazine 50, no. 4 (2012).
[22]Petrangeli, Stefano, Maxim Claeys, Steven Latré,
Jeroen Famaey, and Filip De Turck. "A multi-agent
Q-Learning-based framework for achieving fairness
in HTTP Adaptive Streaming." In Network
Operations and Management Symposium (NOMS),
2014 IEEE, pp. 1-9. IEEE, 2014.
[23]Petrangeli, Stefano, Tingyao Wu, Tim Wauters,
Rafael Huysegems, Tom Bostoen, and Filip De Turck.
"A machine learning-based framework for preventing
video freezes in HTTP adaptive streaming." Journal of
Network and Computer Applications 94 (2017): 78-
92.
[24]Russell, Stuart J., and Andrew Zimdars. "Q-
decomposition for reinforcement learning agents." In
Proceedings of the 20th International Conference on
Machine Learning (ICML-03), pp. 656-663. 2003.
[25]Shahid, Muhammad, Joanna Panasiuk, Glenn Van
Wallendael, Marcus Barkowsky, and Benny
Lövström. "Predicting full-reference video quality
measures using HEVC bitstream-based no-reference
features." In Quality of Multimedia Experience
(QoMEX), 2015 Seventh International Workshop on,
pp. 1-2. IEEE, 2015.
[26]Shanableh, Tamer. "A regression-based framework
for estimating the objective quality of HEVC coding
units and video frames." Signal Processing: Image
Communication 34 (2015): 22-31.
[27]Søgaard, Jacob, Søren Forchhammer, and Kjell
Brunnström. "Quality assessment of adaptive bitrate
videos using image metrics and machine learning." In
Quality of Multimedia Experience (QoMEX), 2015
Seventh International Workshop on, pp. 1-2. IEEE,
2015.
[28]Streijl, Robert C., Stefan Winkler, and David S.
Hands. "Mean opinion score (MOS) revisited:
methods and applications, limitations and
alternatives." Multimedia Systems 22, no. 2 (2016):
213-227.
[29]Suthaharan, Shan. "Decision tree learning." In
Machine Learning Models and Algorithms for Big
Data Classification, pp. 237-269. Springer US, 2016.
[30]Sutton, Richard S., and Andrew G. Barto.
Reinforcement learning: An introduction. Vol. 1, no.
1. Cambridge: MIT press, 1998.
[31]Uzakgider, Tuba, Cihat Cetinkaya, and Muge Sayit.
"Learning-based approach for layered adaptive video
streaming over SDN." Computer Networks 92 (2015):
357-368.
[32]van der Hooft, Jeroen, Stefano Petrangeli, Maxim
Claeys, Jeroen Famaey, and Filip De Turck. "A
learning-based algorithm for improved bandwidth-
awareness of adaptive streaming clients." In
Integrated Network Management (IM), 2015
IFIP/IEEE International Symposium on, pp. 131-138.
IEEE, 2015.
[33]Vega, M. Torres, and A. Liotta. "LIMP-video quality
database." (2016).
[34]Vega, Maria Torres, Decebal Constantin Mocanu,
Jeroen Famaey, Stavros Stavrou, and Antonio Liotta.
"Deep Learning for Quality Assessment in Live Video
Streaming." IEEE Signal Processing Letters 24, no. 6
(2017): 736-740.
[35]Vidal, José M. "Learning in multiagent systems: An
introduction from a game-theoretic perspective." In
Adaptive agents and multi-agent systems, pp. 202-
215. Springer, Berlin, Heidelberg, 2003.
[36]Viswanathan, Mahesh, and Madhubalan Viswanathan.
"Measuring speech quality for text-to-speech systems:
development and assessment of a modified mean
opinion score (MOS) scale." Computer Speech &
Language 19, no. 1 (2005): 55-83.
[37]Wang, Zhiheng, Sujata Banerjee, and Sugih Jamin.
"Studying streaming video quality: from an
application point of view." In Proceedings of the
eleventh ACM international conference on
Multimedia, pp. 327-330. ACM, 2003.
[38]Wang, Zhou, Ligang Lu, and Alan C. Bovik. "Video
quality assessment based on structural distortion
measurement." Signal Processing: Image
Communication 19, no. 2 (2004): 121-132.
[39]Watkins, Christopher JCH, and Peter Dayan. "Q-
learning." Machine learning 8, no. 3-4 (1992): 279-
292.
[40]Winkler, Stefan. “Analysis of public image and video
database for quality assessment.” IEEE Journal of
selected topics in Signal Processing 6, no. 6 (2012):
616-625
[41]Wu, Tingyao, and Werner Van Leekwijck. "Factor
selection for reinforcement learning in http adaptive
streaming.” In International Conference on
Multimedia Modeling, pp.553-567. Springer, Cham,
2014.
[42]Xue, Yuanyi, Beril Erkin, and Yao Wang. "A novel
no-reference video quality metric for evaluating
temporal jerkiness due to frame freezing." IEEE
Transactions on Multimedia 17, no.1 (2015):134-139.
[43]Zambelli, Alex. "IIS smooth streaming technical
overview." Microsoft Corporation 3 (2009):40.
[44]Zhang, Yue, and Yao Liu. "Buffer-Based
Reinforcement Learning for Adaptive Streaming." In
Distributed Computing Systems (ICDCS), 2017 IEEE
37th International Conference on, pp. 2569-2570.
IEEE, 2017.
Biographies and Photographs
Koffka Khan received the M.Sc. and
M.Phil. degrees from the University of the
West Indies. He is currently a PhD student
and has, to date, published numerous
papers in journals and proceedings of
international repute. His research areas are computational
intelligence, routing protocols, wireless communications,
information security and adaptive streaming controllers.
Wayne Goodridge is a Lecturer in the
Department of Computing and Information
Technology, The University of the West
Indies, St. Augustine. He did his PhD at
Dalhousie University and his research interests include
computer communications and security.