Outlier detection (OD) is widely used in many fields, such as finance, information technology and medicine, to clean up datasets while retaining the useful information. In a traffic system, it alerts the transport department and drivers to abnormal traffic situations such as congestion and traffic accidents. This paper presents a density-based bounded LOF (BLOF) method for large-scale traffic video data in Hong Kong. Dimension reduction by principal component analysis (PCA) was applied to the spatial-temporal traffic signals. Previously, a density-based local outlier factor (LOF) method was performed on a two-dimensional (2D) PCA-processed spatial plane. In this paper, a three-dimensional (3D) PCA-processed spatial space for classical density-based OD is compared for the first time with the results from the 2D counterpart. In our experiments, classical density-based LOF OD was applied to the 3D PCA-processed data domain, which is new in the literature, and compared with the previous 2D domain. The average DSR increased by about 2% in the PM sessions: 91% (2D) versus 93% (3D). Also, comparing the classical density-based LOF and the new BLOF OD methods, the average DSR of the supervised approach increased from 94% (LOF) to 96% (BLOF) for the AM sessions and from 93% (LOF) to 95% (BLOF) for the PM sessions.
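The bounded LOF variant itself is not spelled out in the abstract, but the classical density-based LOF it builds on can be sketched in a few lines. The toy 2-D data and the neighbourhood size k below are illustrative assumptions, not values from the paper:

```python
import math

def lof_scores(points, k=3):
    """Classical density-based Local Outlier Factor (Breunig et al. 2000).

    points: list of coordinate tuples; k: number of neighbours.
    Returns one LOF score per point; scores well above 1 flag outliers.
    """
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]

    # k nearest neighbours (excluding the point itself) and k-distance
    knn, kdist = [], []
    for i in range(n):
        order = sorted((dist[i][j], j) for j in range(n) if j != i)
        knn.append([j for _, j in order[:k]])
        kdist.append(order[k - 1][0])

    # local reachability density: inverse mean reachability distance
    def lrd(i):
        reach = [max(kdist[j], dist[i][j]) for j in knn[i]]
        return k / sum(reach)

    lrds = [lrd(i) for i in range(n)]
    return [sum(lrds[j] for j in knn[i]) / (k * lrds[i]) for i in range(n)]

# Tight cluster plus one isolated point: the isolated point gets the top score.
data = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5), (8, 8)]
scores = lof_scores(data, k=3)
print(max(range(len(data)), key=lambda i: scores[i]))  # index of strongest outlier
```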
Effective Occlusion Handling for Fast Correlation Filter-based Trackers (EECJOURNAL)
Correlation filter-based trackers suffer heavily from multiple peaks in their response maps caused by occlusions. Moreover, the whole tracking pipeline may break down due to the uncertainty introduced by shifting among peaks, which further degrades the correlation filter model. To alleviate the drift caused by occlusions, we propose a novel scheme that chooses a specific filter model according to the scenario. Specifically, an effective measurement function is designed to evaluate the quality of the filter response. A sophisticated strategy is employed to judge whether occlusion has occurred and then decide how to update the filter models. In addition, we take advantage of both a log-polar method and a pyramid-like approach to estimate the best scale of the target. We evaluate our approach on the VOT2018 challenge and the OTB100 dataset; the experimental results show that the proposed tracker achieves promising performance compared with state-of-the-art trackers.
This paper proposes algorithms for dynamic travel time prediction to provide reliable real-time
travel time information using probe travel time data collected by a dedicated short range communication (DSRC)
system. The travel time predictions were performed using arrival-time-based travel time; subsequently, the
accuracy of these predictions was evaluated using the concurrent departure-time-based travel time data, which
were also collected by the DSRC system. The prediction methodologies proposed in this research include the
Kalman filter and a newly developed algorithm that uses weighting factors according to probe sample size. An
evaluation of the performance of the two algorithms showed their errors ranged from 5 to 7%, thereby showing
satisfactory results. Given that the Kalman filter requires historical travel times for prediction, the
similarity between the historical and current data is a core factor for reliable travel time prediction. The newly
developed algorithm, on the other hand, does not need historical data, so its benefit is greatest
when historical travel time data analogous to the current data are not readily available.
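The abstract does not give the filter's state model, so the sketch below assumes the simplest case: a scalar random-walk model of link travel time updated by successive probe measurements. The noise variances q and r and the probe values are illustrative, not the paper's calibration:

```python
def kalman_predict_travel_times(measurements, q=4.0, r=25.0):
    """Minimal scalar Kalman filter for link travel time (seconds).

    Assumes a random-walk state model: x_k = x_{k-1} + w, w ~ N(0, q),
    with noisy probe measurements z_k = x_k + v, v ~ N(0, r).
    Returns the one-step-ahead prediction made before each measurement.
    """
    x, p = measurements[0], r        # initialise from the first observation
    predictions = []
    for z in measurements[1:]:
        # predict step (random walk: the state estimate carries over)
        p = p + q
        predictions.append(x)
        # update step with the new probe observation
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
    return predictions

probes = [120, 124, 131, 128, 135, 140]   # hypothetical probe travel times
print(kalman_predict_travel_times(probes))
```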
A Neural Network Approach to Identify Hyperspectral Image Content (IJECEIAES)
Hyperspectral imaging produces very high-dimensional data with hundreds of channels. Because hyperspectral images (HSIs) deliver such complete knowledge of a scene, classification algorithms are an important tool for their practical use. HSIs typically contain a large number of correlated and redundant features, which decreases classification accuracy; this redundancy also adds computational burden without contributing any useful information to the classifier. In this study, an unsupervised Band Selection Algorithm (BSA) is considered together with a Linear Projection (LP) that depends on metric band similarities. Afterwards, the Monogenic Binary Feature (MBF) is used to perform texture analysis of the HSI, in which three operational components represent the monogenic signal: phase, amplitude and orientation. In the post-processing classification stage, a feature-mapping function provides important information that helps the kernel-based neural network (KNN) optimize its generalization ability. The KNN can also be adapted to multiclass applications by using multiple output nodes instead of a single output node.
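The paper's exact BSA/LP formulation is not given in the abstract; as a stand-in, the sketch below shows the general idea of similarity-based band selection, greedily keeping bands that are least correlated with those already chosen. The synthetic bands and the greedy criterion are illustrative assumptions:

```python
import math

def select_bands(image, n_bands):
    """Greedy similarity-based band selection (illustrative, not the paper's BSA).

    image: list of bands, each a flat list of pixel intensities.
    Repeatedly keeps the band least correlated with those already chosen.
    """
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = math.sqrt(sum((x - ma) ** 2 for x in a))
        vb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (va * vb)

    chosen = [0]                              # seed with the first band
    while len(chosen) < n_bands:
        best, best_score = None, None
        for i in range(len(image)):
            if i in chosen:
                continue
            # worst-case (largest) absolute correlation with the chosen set
            score = max(abs(corr(image[i], image[j])) for j in chosen)
            if best_score is None or score < best_score:
                best, best_score = i, score
        chosen.append(best)
    return sorted(chosen)

# Three synthetic "bands": band 1 duplicates band 0, band 2 is independent.
bands = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 1, 3, 2]]
print(select_bands(bands, 2))  # the redundant duplicate band is skipped
```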
In this paper, we present a new road traffic monitoring approach for a highway control and management system called RoadGuard. This system copes with several challenges of this type of application in order to count and track road objects robustly. It adopts a novel approach for tracking road objects: first, it continuously determines their positions on the road, and then it uses the vehicle positions to estimate their trajectories. The trajectory analysis provides vital information for controlling and managing highway traffic. Our main contribution is a tracking method based on a coherent strategy in which both region and object information are used to establish object correspondence over time. Our method operates in two phases: a spatial analysis that uses multilevel region-descriptor matching to identify object interactions and particular object states, and a continuous temporal analysis applied to cope with track management issues. As demonstrated experimentally, the proposed method can detect, track and count road objects accurately in highway videos that include several constraints. In addition, it produces effective and stable tracking of road objects.
National Flags Recognition Based on Principal Component Analysis (ijtsrd)
Recognizing an unknown flag in a scene is challenging due to the diversity of the data and the complexity of the identification process. Flags are associated with geographical regions, countries and nations, and identifying the flags of different countries is a difficult task. The aim of this study is to propose a feature-extraction-based recognition system for Myanmar's national flags. Image features are acquired from the region and state flags, which are identified using principal component analysis (PCA), a statistical approach for reducing the number of features in the national flag recognition system. Soe Moe Myint | Moe Moe Myint | Aye Aye Cho, "National Flags Recognition Based on Principal Component Analysis", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26775.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/26775/national-flags-recognition-based-on-principal-component-analysis/soe-moe-myint
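PCA reduces each flag's feature vector by projecting it onto a few directions of largest variance. A minimal pure-Python sketch of the core computation (first principal axis via power iteration, demonstrated on synthetic 2-D data rather than real flag features) might look like:

```python
def principal_component(data, iters=200):
    """Dominant principal component via power iteration (pure Python).

    data: list of feature vectors. Returns (mean, unit direction) of the
    first principal axis -- the core of PCA-based feature reduction.
    """
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    centred = [[x - m for x, m in zip(row, mean)] for row in data]
    # covariance matrix (d x d)
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                      # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def project(row, mean, v):
    """Reduce one feature vector to its score along the first component."""
    return sum((x - m) * c for x, m, c in zip(row, mean, v))

# Points spread mostly along the x-axis: the first component is ~(1, 0).
pts = [[0, 0], [1, 0.1], [2, -0.1], [3, 0.05], [4, -0.05]]
mean, axis = principal_component(pts)
print([round(abs(c), 2) for c in axis])
```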
Multilinear Kernel Mapping for Feature Dimension Reduction in Content Based M... (ijma)
In the process of content-based multimedia retrieval, multimedia information is processed in order to
obtain descriptive features. Descriptive representation of features results in a huge feature count, which
in turn causes processing overhead. To reduce this overhead, various approaches to dimensionality
reduction have been used, among which PCA and LDA are the most common. However, these
methods do not reflect the significance of feature content in terms of the inter-relations among all dataset
features. To achieve dimension reduction based on histogram transformation, features with low
significance can be eliminated. In this paper, we propose a feature dimension reduction approach
based on multi-linear kernel (MLK) modeling. A benchmark dataset is used for the experimental work,
and the proposed approach is observed to improve on the conventional system in our analysis.
On comprehensive analysis of learning algorithms on pedestrian detection usin... (UniversitasGadjahMada)
Despite the surge of deep learning, deploying deep learning-based pedestrian detection in real systems faces hurdles, mainly due to its heavy resource usage. Classical feature-based detection therefore remains a feasible option. There have been many efforts to improve the performance of pedestrian detection systems. Among the many feature sets, the Histogram of Oriented Gradients (HOG) appears to be very effective for person detection. In this research, various machine learning algorithms are investigated for person detection and evaluated to obtain the optimal accuracy and speed of the system.
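As a concrete illustration of the HOG feature mentioned above, the magnitude-weighted orientation histogram for a single cell can be computed as follows (the cell size, bin count and toy edge image are illustrative assumptions, not the paper's configuration):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Histogram of Oriented Gradients for one cell (illustrative sketch).

    cell: 2-D list of grey-level intensities. Gradients are taken with
    simple central differences and binned by unsigned orientation (0-180).
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * bins) % bins] += mag   # magnitude-weighted vote
    return hist

# A vertical step edge: all gradient energy lands in the 0-degree bin.
cell = [[0, 0, 10, 10]] * 4
print(hog_cell_histogram(cell))
```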
Forensic science is a key contributor to investigating and solving complicated criminal cases; it draws on biology, chemistry, mathematics and physics to establish the facts of a crime. One of its crime scene analysis techniques is bloodstain analysis, which enables investigators to understand the crime scene and infer the kind of weapon used, its speed, the flight time of the blood drops and the angle of the weapon or impact. The aim of this work is to investigate bloodstain analysis techniques and to design a new technique for recovering the flight time, angle of impact and velocity of the weapon. First, the crime scene image data is accepted as initial input; to improve the accuracy of the analysis, additional inputs are also included, such as the height of impact, the distance of the blood drops and the line passing through the ellipse formed in the input image. Finally, the velocity and flight time of the blood drops are calculated using the laws of motion and trajectory analysis. The proposed technique is implemented in Java, and its performance is evaluated in terms of the time and space complexity of the implemented system. The results demonstrate that the proposed technique is effective and consumes few resources.
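The abstract does not give the equations, but the standard bloodstain-pattern relations it alludes to are the ellipse rule for the angle of impact (sin θ = stain width / stain length) and simple projectile motion for flight time and speed. The stain dimensions and distances below are hypothetical:

```python
import math

def impact_angle_deg(stain_width, stain_length):
    """Angle of impact from an elliptical bloodstain: sin(theta) = W / L."""
    return math.degrees(math.asin(stain_width / stain_length))

def flight_time_and_speed(height, horizontal_distance, g=9.81):
    """Flight time and launch speed for a horizontally launched drop.

    Assumes simple projectile motion with no air resistance:
    height = g t^2 / 2, speed = distance / t. Units: metres, seconds.
    """
    t = math.sqrt(2.0 * height / g)
    return t, horizontal_distance / t

# Hypothetical stain: 6 mm wide, 12 mm long; drop from 1.2 m landing 0.9 m away.
print(round(impact_angle_deg(6, 12), 1))   # 30.0 degrees (asin(0.5))
t, v = flight_time_and_speed(1.2, 0.9)
print(round(t, 3), round(v, 2))
```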
Nonlinear prediction of human resting state functional connectivity based on ... (eSAT Journals)
Abstract
Mounting evidence demonstrates that neuronal activity derived from functional magnetic resonance imaging (fMRI) relates to the
underlying anatomical circuitry measured by diffusion tensor/spectrum imaging (DTI/DSI). However, exploring the relationship
between functional connectivity (FC) and structural connectivity (SC) remains challenging, and this has motivated a number of
computational models that investigate the extent to which the dynamics depend on the topology. Nevertheless, most of these models
are complex and difficult to treat analytically. In this paper, for simplicity, we use four network communication measures
extracted from SC, together with polynomial curve fitting, to predict FC. Our results indicate that all of these measures
predict FC via the nonlinear fitting method. Moreover, compared with the linear method, the fit between predicted and
empirical FC is higher after applying the nonlinear transformation to the communication measures, which may help shed light on the
function-structure relationship.
Key Words: brain connectivity; fMRI; DTI/DSI; network communication measure; nonlinear fitting
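The polynomial fitting step can be sketched directly. The code below fits a least-squares polynomial via the normal equations; the degree and the synthetic measure/FC values are illustrative, not the paper's data:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (pure Python).

    Returns coefficients [c0, c1, ...] for c0 + c1*x + c2*x^2 + ...
    Illustrates nonlinear (polynomial) fitting of FC against a structural
    communication measure.
    """
    m = degree + 1
    # Build the normal equations A c = b for the Vandermonde system.
    a = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(a[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = (b[r] - s) / a[r][r]
    return coeffs

# Synthetic "communication measure" vs "FC" following y = 1 + 2x^2 exactly:
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 9, 19, 33]
print([round(c, 6) for c in polyfit(xs, ys, 2)])  # ~[1, 0, 2]
```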
License plate recognition is one of the core technologies in intelligent traffic control. In this paper, a new and tunable algorithm that can detect multiple license plates in high-resolution applications is proposed. The algorithm targets the new Iranian plate and some European countries' plates, characterized by both the blue area they contain and their geometric shape. The suggested algorithm runs fast because it does not use heavy pre-processing operations such as image-enhancement filters, edge detection or noise removal in the early stages. Our method is adapted to the plate model, namely the blue section of the plate, and can successfully detect several plates when they appear in the same image. We evaluated our method on two Persian single-vehicle license plate data sets, obtaining 99.33% and 99% correct recognition rates respectively. We further tested our algorithm on a Persian multiple-vehicle license plate data set and achieved a 98% accuracy rate, and we obtained approximately 99% accuracy in the character recognition stage.
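The blue-region cue central to the method can be illustrated with a crude colour mask. The thresholds and the toy pixel row below are assumptions for the sketch, not values from the paper:

```python
def blue_mask(image, min_blue=100, dominance=40):
    """Crude blue-region mask for plate candidates (illustrative only).

    image: 2-D list of (r, g, b) tuples. A pixel is "plate blue" when its
    blue channel is strong and clearly dominates red and green.
    """
    return [[1 if b >= min_blue and b - max(r, g) >= dominance else 0
             for (r, g, b) in row]
            for row in image]

# Greyish, strongly blue, dark, and pure-blue pixels in one row.
row = [(200, 200, 210), (10, 20, 180), (30, 40, 90), (0, 0, 255)]
print(blue_mask([row])[0])  # → [0, 1, 0, 1]
```

In a full pipeline, connected regions of the mask with plate-like aspect ratios would become candidate plates.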
Trajectory Segmentation and Sampling of Moving Objects Based On Representativ... (ijsrd.com)
Moving Object Databases (MOD), although ubiquitous, still call for methods able to understand, search, analyze, and browse their spatiotemporal content. In this paper, we propose a method for trajectory segmentation and sampling based on the representativeness of the (sub-)trajectories in the MOD. To find the most representative sub-trajectories, the following methodology is proposed. First, a novel global voting algorithm is performed, based on local density and trajectory similarity information. It is applied to each segment of a trajectory, forming a local trajectory descriptor that represents the line segment's representativeness. The sequence of this descriptor along a trajectory gives the trajectory's voting signal, in which high values correspond to its most representative parts. Then, a novel segmentation algorithm is applied to this signal that automatically estimates the number of partitions and the partition borders, identifying partitions that are homogeneous with respect to representativeness. Finally, a sampling method over the resulting segments yields the most representative sub-trajectories in the MOD. Our experimental results on synthetic and real MODs verify the effectiveness of the proposed scheme, also in comparison with other sampling techniques.
Leader follower formation control of ground vehicles using camshift based gui... (ijma)
Autonomous ground vehicles are designed for formation control that relies on range and bearing information received from a forward-looking camera. A visual guidance control algorithm is designed in which real-time image processing provides the feedback signals. The vision subsystem and control subsystem work in parallel to accomplish formation control. Proportional navigation and line-of-sight guidance laws are used to estimate the range and bearing of the leader vehicle from the vision subsystem. The vision detection and localization algorithms used here are similar to approaches for many computer vision tasks, such as face tracking and detection, that are based on colour and texture features, and the non-parametric Continuously Adaptive Mean-shift (CamShift) algorithm is used to keep track of the leader. This is proposed for the first time in the leader-follower framework. The algorithms are simple but effective in real time and provide an alternative to traditional approaches such as the Viola-Jones algorithm. Further, to stabilize the follower to the leader trajectory, a sliding mode controller is used to dynamically track the leader. The performance is demonstrated in simulation and in practical experiments.
A Method for Sudanese Vehicle License Plates Detection and Extraction (Editor IJCATR)
License plate detection and extraction is an important phase of vehicle license plate recognition systems, which have been
an active research topic in the computer vision domain for identifying vehicles by their license plates without direct human
intervention. This paper presents a simple, fast and automatic license plate detection method for the current shape of the Sudanese license
plate. The proposed method involves several steps: green channel extraction, edge detection, region-of-interest selection, dilation
with a special structuring element, and connected component analysis. In order to analyze the performance and efficiency of the
proposed method, a data set of Sudanese vehicles has been created, and a number of experiments have been carried
out on it. Compared with license plate detection for other countries, the achieved results are satisfactory.
Help the Genetic Algorithm to Minimize the Urban Traffic on Intersections (IJORCS)
Controlling traffic lights at intersections is one of the main issues in traffic optimization: intersections regulate the flow of vehicles and eliminate conflicting traffic streams. Modeling and simulation of traffic are widely used in industry; indeed, an industrial system is studied through modeling and simulation before it is built, when doing so is economical. The aim of this article is an intelligent way to control traffic. The first stage of the project collects statistical data (the cycle time of each light at the intersection and the number of vehicles waiting at a red light); from these data, the optimal values are then sought. Optimization of the parameters is performed by a genetic algorithm. The GA begins by encoding the variables in binary (over the range given by the initial data set), starts from an initial population, produces new generations with the genetic operators of mutation and crossover, and finally selects the members with the optimal fitness values as the solution set. The optimal output has been modeled and implemented with Petri nets in the CPN Tools software. The results indicate that the project improves the performance of intersection traffic control systems. Evolutionary methods such as genetic algorithms, applied to data collected at intersections, can reduce the waiting time of traffic behind red lights and determine appropriate cycle times.
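The GA pipeline described above (binary encoding, initial population, crossover and mutation, fitness-based selection) can be sketched generically. The cost function, encoding range, population size and operator rates below are illustrative assumptions, not the paper's calibrated values:

```python
import random

def ga_optimize(fitness, n_bits=8, pop_size=20, generations=60, seed=1):
    """Minimal binary-coded genetic algorithm (selection + crossover + mutation).

    fitness maps an integer in [0, 2^n_bits) to a cost to minimise, e.g.
    total red-light waiting time as a function of the green-phase length.
    """
    rng = random.Random(seed)
    decode = lambda bits: int("".join(map(str, bits)), 2)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda b: fitness(decode(b)))
        elite = pop[: pop_size // 2]             # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional single-bit mutation
                k = rng.randrange(n_bits)
                child[k] ^= 1
            children.append(child)
        pop = elite + children
    return decode(min(pop, key=lambda b: fitness(decode(b))))

# Toy cost: waiting time is minimised when the green phase is 42 "seconds".
best = ga_optimize(lambda g: (g - 42) ** 2)
print(best)
```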
Enhancement and Segmentation of Historical Records (csandit)
Document Analysis and Recognition (DAR) aims to extract the information in a document automatically and also addresses human comprehension. The automatic processing of degraded
historical documents is an application of the document image analysis field that is confronted with many difficulties due to storage conditions and the complexity of the scripts. The main interest
of enhancing historical documents is to remove the undesirable artifacts that appear in the
background and to highlight the foreground, so as to enable automatic recognition of documents
with high accuracy. This paper addresses pre-processing and segmentation of ancient scripts as an initial step towards automating the task of an epigraphist in reading and deciphering inscriptions.
Pre-processing involves enhancement of degraded ancient document images, achieved through four different spatial filtering methods for smoothing or sharpening, namely the Median,
Gaussian blur, Mean and Bilateral filters, with different mask sizes. This is followed by
binarization of the enhanced image to highlight the foreground information, using the Otsu
thresholding algorithm. In the second phase, segmentation is carried out using the Drop Fall and
Water Reservoir approaches to obtain sampled characters, which can be used in later stages of
OCR. The system showed good results when tested on nearly 150 samples of degraded
epigraphic images of varying quality, giving the best enhanced output with a 4x4 mask
for the Median filter, a 2x2 mask for Gaussian blur, and 4x4 masks for the Mean and Bilateral filters.
The system can effectively sample characters from enhanced images, giving segmentation rates of 85%-90% for the Drop Fall and 85%-90% for the Water Reservoir techniques respectively.
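The Otsu binarization step named above is a standard algorithm and can be sketched directly; the toy bimodal "document" image below is illustrative:

```python
def otsu_threshold(gray):
    """Otsu's global threshold for a greyscale image (pure Python).

    gray: 2-D list of intensities in 0..255. Returns the threshold that
    maximises between-class variance, as used to binarize the enhanced
    document images before segmentation.
    """
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w0 = 0          # background pixel count
    sum0 = 0.0      # background intensity sum
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy image: dark background (20) and bright strokes (200).
img = [[20] * 6 + [200] * 2 for _ in range(8)]
t = otsu_threshold(img)
print(t)  # falls between the two modes
binary = [[1 if v > t else 0 for v in row] for row in img]
```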
Object extraction using edge, motion and saliency information from videos (eSAT Journals)
Abstract: Object detection is the process of finding instances of objects of a certain class, which is useful in the analysis of video or images. A number of algorithms have been developed for object detection, and it plays a significant role in a variety of areas of computer vision such as video surveillance and image retrieval. This paper presents an efficient algorithm for moving object extraction using edge, motion and saliency information from videos. Our methodology includes four stages: frame generation, pre-processing, foreground generation, and integration of cues. Foreground generation includes edge detection using the Sobel edge detection algorithm, motion detection using a pixel-based absolute difference algorithm, and motion saliency detection. A Conditional Random Field (CRF) is applied to integrate the cues, giving better spatial information about the segmented object. Keywords: Object detection, Saliency information, Sobel edge detection, CRF.
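The Sobel step of the foreground-generation stage is standard and can be sketched directly; the toy image below is an illustrative vertical step edge:

```python
def sobel_magnitude(gray):
    """Sobel gradient magnitude for the interior pixels of a grey image.

    gray: 2-D list of intensities. Implements the Sobel edge-detection
    step used in foreground generation; borders are left at zero.
    """
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[i][j] * gray[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * gray[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Vertical step edge between columns 1 and 2: strong response along it.
img = [[0, 0, 9, 9]] * 4
mag = sobel_magnitude(img)
print(mag[1])  # nonzero only where the edge passes
```

Thresholding the magnitude map would give the binary edge cue that is later fused with the motion and saliency cues by the CRF.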
Anomaly Detection using multidimensional reduction Principal Component Analysis (IOSR Journals)
Anomaly detection has been an important research topic in data mining and machine learning. Many
real-world applications, such as intrusion or credit card fraud detection, require an effective and efficient
framework to identify deviated data instances. However, most anomaly detection methods are typically
implemented in batch mode and thus cannot easily be extended to large-scale problems without sacrificing
computation and memory requirements. In this paper, we propose the multidimensional reduction principal
component analysis (MdrPCA) algorithm to address this problem, and we aim at detecting the presence of
outliers in a large amount of data via an online updating technique. Unlike prior principal component
analysis (PCA)-based approaches, we do not store the entire data matrix or covariance matrix, so our
approach is of particular interest in online or large-scale problems. By applying multidimensional reduction PCA
to the target instance and extracting the principal direction of the data, the proposed MdrPCA determines
the anomaly of the target instance according to the variation of the resulting dominant eigenvector.
Since MdrPCA need not perform eigen-analysis explicitly, the proposed framework is favored for online
applications with computation or memory limitations, compared with the well-known power method for
PCA and other popular anomaly detection algorithms.
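The MdrPCA update itself is not given in the abstract, but the idea of tracking a dominant eigenvector online, without storing the data or covariance matrix, can be illustrated with the generic Oja's rule (a stand-in for illustration, not the paper's algorithm):

```python
import random

def oja_update(w, x, lr=0.01):
    """One Oja's-rule step: online tracking of the dominant eigenvector.

    w: current unit-norm direction estimate; x: one data sample.
    Each streamed sample nudges w toward the principal direction,
    so no data or covariance matrix is ever stored.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))          # projection onto w
    w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    norm = sum(wi * wi for wi in w) ** 0.5
    return [wi / norm for wi in w]

rng = random.Random(0)
w = [1.0, 0.0]
for _ in range(2000):
    s = rng.gauss(0, 1)
    x = (3 * s, s + rng.gauss(0, 0.1))   # data stretched along (3, 1)
    w = oja_update(w, x)
print([round(c, 2) for c in w])  # close to the unit vector along (3, 1)
```

In an anomaly detector of this flavour, a sample that noticeably perturbs the tracked direction would receive a high outlier score.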
Application Of Extreme Value Theory To Bursts Prediction (CSCJournals)
Bursts and extreme events in quantities such as connection durations, file sizes and throughput may produce undesirable consequences in computer networks, deterioration in the quality of service being a major one. Predicting these extreme events and bursts is therefore important: it helps in reserving the right resources for a better quality of service. We applied extreme value theory (EVT) to predict bursts in network traffic, taking a deeper look at its application through EVT-based exploratory data analysis. We found that traffic naturally divides into two categories, internal and external. Internal traffic follows a generalized extreme value (GEV) model with a negative shape parameter, which corresponds to the Weibull distribution; external traffic follows a GEV with a positive shape parameter, which is the Frechet distribution. These findings are of great value to the quality of service in data networks, especially when included in service level agreements as traffic descriptor parameters.
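As a concrete starting point, the block-maxima sample to which a GEV model is fitted can be extracted as follows (the throughput series and block size are illustrative; the GEV fit itself, whose shape parameter's sign separates the Weibull and Frechet cases, would follow with a statistics package):

```python
def block_maxima(series, block):
    """Block-maxima sample for EVT: the maximum of each fixed-size block.

    series: observations of a traffic quantity (e.g. per-second throughput).
    Only complete blocks are used; a trailing partial block is dropped.
    """
    return [max(series[i:i + block])
            for i in range(0, len(series) - block + 1, block)]

throughput = [3, 9, 2, 5, 8, 1, 7, 4, 6, 10, 2, 3]
print(block_maxima(throughput, 4))  # → [9, 8, 10]
```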
Detection of Outliers in Large Dataset using Distributed Approach (Editor IJMTER)
In this paper, a distributed method is introduced for detecting distance-based outliers in very large
data sets. The approach is based on the concept of an outlier detection solving set, a small subset of the data
set that can also be employed for predicting novel outliers. The method exploits parallel computation in order to
obtain vast time savings. Indeed, beyond preserving the correctness of the result, the proposed schema exhibits
excellent performance. From the theoretical point of view, for common settings, the temporal cost of our
algorithm is expected to be at least three orders of magnitude lower than that of the classical nested-loop-like approach to
detecting outliers. Experimental results show that the algorithm is efficient and that its running time scales quite well
for an increasing number of nodes. We also discuss a variant of the basic strategy that reduces the amount of
data to be transferred in order to improve both the communication cost and the overall runtime. Importantly, the
solving set computed in a distributed environment has the same quality as that produced by the corresponding
centralized method.
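The classical nested-loop baseline that the distributed solving-set method is benchmarked against can be sketched as follows; the DB(k, radius) outlier definition and the names here are a generic illustration, not the authors' code.

```python
def distance_outliers(points, radius, k):
    """Flag DB(k, radius) distance-based outliers with the classical
    nested loop: a point is an outlier if it has fewer than k
    neighbours within `radius`."""
    outliers = []
    for i, p in enumerate(points):
        neighbours = 0
        for j, q in enumerate(points):
            if i == j:
                continue
            dist = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            if dist <= radius:
                neighbours += 1
                if neighbours >= k:
                    break  # early exit: p cannot be an outlier
        if neighbours < k:
            outliers.append(i)
    return outliers
```

The quadratic scan over all pairs is exactly what makes this baseline slow on very large data sets, and what the solving-set and parallel schemes avoid.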
On comprehensive analysis of learning algorithms on pedestrian detection usin... (UniversitasGadjahMada)
Despite the surge of deep learning, deploying deep learning-based pedestrian detection in real systems faces hurdles, mainly due to huge resource usage. Classical feature-based detection systems therefore remain a feasible option. There have been many efforts to improve the performance of pedestrian detection systems; among the many feature sets, the Histogram of Oriented Gradients appears to be very effective for person detection. In this research, various machine learning algorithms are investigated for person detection and evaluated to obtain the optimal accuracy and speed of the system.
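As a rough illustration of the Histogram of Oriented Gradients feature mentioned above, the orientation histogram of a single cell might be computed as below; this is a simplified sketch (no block normalization, no interpolation between bins) under our own naming, not the paper's implementation.

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Unsigned-gradient orientation histogram for one cell of a
    grayscale image (list of rows of intensities), the building
    block of HOG descriptors."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(angle / (180.0 / bins)) % bins] += mag   # magnitude-weighted vote
    return hist
```

A vertical edge, for example, produces purely horizontal gradients and so votes only into the 0-degree bin.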
Forensic science is a key contributor to investigating and solving complicated criminal cases, drawing on biology, chemistry, mathematics and physics to establish the facts of a crime. One of its crime scene analysis techniques is bloodstain analysis, which enables investigators to understand the crime scene and to infer the kind of weapon used, its speed, the flight time of the blood drops and the angle of impact. The aim of this work is to study the bloodstain analysis technique and to design a new technique for recovering the flight time, angle of impact and velocity of the weapon. The crime scene image is accepted as the initial input; for accurate analysis, additional inputs are also included, such as the height of impact, the distance of the blood drops and the line passing through the ellipse formed in the input image. The velocity and flight time of the blood drops are then calculated using the laws of motion and trajectory analysis. The proposed technique is implemented in Java, and its performance is computed in terms of the time and space complexity of the implemented system, demonstrating that the technique is effective and consumes few resources.
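The stain-ellipse and law-of-motion relations alluded to above are commonly written as sin(theta) = width/length for the angle of impact, and h = g*t^2/2, d = v*t for a horizontally launched drop. A sketch under those standard textbook assumptions follows; the paper's exact formulation is not given, so these functions are illustrative.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_angle(width, length):
    """Angle of impact (degrees) from the stain ellipse:
    sin(theta) = width / length."""
    return math.degrees(math.asin(width / length))

def flight_estimates(height, distance):
    """Assume a horizontally launched drop: fall time from the height
    of impact (h = g t^2 / 2), then horizontal launch speed from the
    distance travelled (d = v t)."""
    t = math.sqrt(2.0 * height / G)
    v = distance / t
    return t, v
```

For example, a stain twice as long as it is wide implies a 30-degree impact angle.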
Nonlinear prediction of human resting state functional connectivity based on ... (eSAT Journals)
Abstract
Mounting evidence demonstrates that neuronal activity derived from functional magnetic resonance imaging (fMRI) relates to the underlying anatomical circuitry measured by diffusion tensor/spectrum imaging (DTI/DSI). However, exploring the relationship between functional connectivity (FC) and structural connectivity (SC) remains challenging, which has motivated a number of computational models investigating the extent to which the dynamics depend on the topology. Nevertheless, most of these models are complex and difficult to treat analytically. In this paper, for simplicity, we utilize four network communication measures extracted from SC together with polynomial curve fitting to predict FC. Our results indicate that all of these measures predict FC via the nonlinear fitting method. Moreover, compared with the linear method, the fit between predicted and empirical FC is higher after applying the nonlinear process to the communication measures, which may help shed light on the function-structure relationship.
Keywords: brain connectivity; fMRI; DTI/DSI; network communication measure; nonlinear fitting
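The polynomial curve fitting used above to map communication measures to FC can be sketched via the normal equations; this is a generic least-squares fit with our own names and degree, not the authors' code.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations of the
    Vandermonde system, solved with Gaussian elimination
    (partial pivoting). Returns coefficients, lowest order first."""
    n = degree + 1
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * n                         # back substitution
    for row in range(n - 1, -1, -1):
        s = aty[row] - sum(ata[row][c] * coeffs[c] for c in range(row + 1, n))
        coeffs[row] = s / ata[row][row]
    return coeffs

def polyval(coeffs, x):
    """Evaluate a polynomial given lowest-order-first coefficients."""
    return sum(c * x ** i for i, c in enumerate(coeffs))
```

Comparing a degree-1 against a degree-2 or degree-3 fit of the same measure-versus-FC scatter is the kind of linear-versus-nonlinear comparison the abstract describes.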
License plate recognition is one of the core technologies in intelligent traffic control. In this paper, a new and tunable algorithm that can detect multiple license plates in high-resolution applications is proposed. The algorithm targets the identification of Iranian and some European plates, characterized by both the blue area they contain and their geometric shape. The suggested algorithm is fast because it avoids heavy pre-processing operations such as image-enhancement filters, edge detection and noise removal at the early stages. Instead, the method adapts to the model, namely the blue section of the plate, so that if several plates appear in the image, the method can successfully detect them all. We evaluated our method on two Persian single-vehicle license plate data sets, obtaining 99.33% and 99% correct recognition rates respectively. We further tested our algorithm on a Persian multiple-vehicle license plate data set and achieved a 98% accuracy rate, as well as approximately 99% accuracy in the character recognition stage.
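A minimal sketch of the model-adaptation idea, thresholding blue-dominant pixels without any heavy pre-processing; the margin value and function names are assumptions for illustration, not the paper's method.

```python
def blue_mask(image, margin=30):
    """Mark pixels whose blue channel dominates red and green by at
    least `margin`, a cheap first pass for locating a blue plate
    region (no filtering or edge detection needed)."""
    return [[1 if b - max(r, g) >= margin else 0
             for (r, g, b) in row] for row in image]

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of marked pixels, or None."""
    coords = [(y, x) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return min(ys), min(xs), max(ys), max(xs)
```

Each connected blue region found this way would then be checked against the plate's expected geometric shape, and multiple regions naturally yield multiple plate candidates.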
Trajectory Segmentation and Sampling of Moving Objects Based On Representativ... (ijsrd.com)
Moving Object Databases (MOD), although ubiquitous, still call for methods able to understand, search, analyze, and browse their spatiotemporal content. In this paper, we propose a method for trajectory segmentation and sampling based on the representativeness of the (sub)trajectories in the MOD. In order to find the most representative subtrajectories, the following methodology is proposed. First, a novel global voting algorithm is performed, based on local density and trajectory similarity information. This method is applied to each segment of the trajectory, forming a local trajectory descriptor that represents line segment representativeness. The sequence of this descriptor over a trajectory gives the voting signal of the trajectory, where high values correspond to the most representative parts. Then, a novel segmentation algorithm is applied to this signal that automatically estimates the number of partitions and the partition borders, identifying partitions that are homogeneous in their representativeness. Finally, a sampling method over the resulting segments yields the most representative subtrajectories in the MOD. Our experimental results on synthetic and real MODs verify the effectiveness of the proposed scheme, also in comparison with other sampling techniques.
Leader follower formation control of ground vehicles using camshift based gui... (ijma)
Autonomous ground vehicles have been designed for formation control that relies on ranging and bearing information received from a forward-looking camera. A visual guidance control algorithm is designed in which real-time image processing provides the feedback signals. The vision subsystem and control subsystem work in parallel to accomplish formation control. Proportional navigation and line-of-sight guidance laws are used to estimate the range and bearing of the leader vehicle from the vision subsystem. The algorithms for visual detection and localization used here are similar to approaches for many computer vision tasks, such as face tracking and detection, that are based on color and texture features, and a non-parametric Continuously Adaptive Mean-Shift algorithm is used to keep track of the leader. This is proposed for the first time in the leader-follower framework. The algorithms are simple but effective in real time and provide an alternative to traditional approaches such as the Viola-Jones algorithm. Further, to stabilize the follower onto the leader trajectory, a sliding mode controller is used to dynamically track the leader. The performance is demonstrated in simulation and in practical experiments.
A Method for Sudanese Vehicle License Plates Detection and Extraction (Editor IJCATR)
License plate detection and extraction is an important phase of vehicle license plate recognition systems, which have been an active research topic in the computer vision domain for identifying vehicles by their license plates without direct human intervention. This paper presents a simple, fast and automatic license plate detection method for the current shape of the Sudanese license plate. The proposed method involves several steps: green channel extraction, edge detection, region-of-interest selection, dilation with a special structuring element, and connected component analysis. To analyze the performance and efficiency of the proposed method, a data set of Sudanese vehicles has been created, and a number of experiments have been carried out on it. Compared with license plate detection for other countries, the achieved results are satisfactory.
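The connected component analysis step listed above can be sketched with a standard BFS labeling pass; 4-connectivity is assumed here, since the paper does not specify the connectivity used.

```python
from collections import deque

def connected_components(grid):
    """Label 4-connected components of a binary grid (list of rows
    of 0/1) with breadth-first search; returns the label grid and
    the number of components found."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and not labels[sy][sx]:
                current += 1                     # start a new component
                queue = deque([(sy, sx)])
                labels[sy][sx] = current
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

After labeling, each component's size and bounding box can be tested against the expected plate dimensions to discard non-plate regions.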
Help the Genetic Algorithm to Minimize the Urban Traffic on Intersections (IJORCS)
Optimal control of traffic lights at intersections is one of the main issues in urban traffic. Intersections are used to regulate the flow of vehicles and to eliminate conflicting traffic streams. Modeling and simulation of traffic are widely used in industry: an industrial system is studied through modeling and simulation before it is built, when doing so is economical and affordable. The aim of this article is a smart way to control traffic. The first stage of the project collects statistical data (the cycle time of each traffic light at the intersection and the time vehicles wait at a red light), from which the optimal values are then sought. Optimization of the parameters is performed by a genetic algorithm: it begins with a coding step using binary variables (over the range specified by the initial data), starts from an initial population, produces new generations with the mutation and crossover operators, and finally selects the members with the best fitness values as the solution set. The optimal output has been modeled and implemented with Petri nets in the CPN Tools software. The results indicate that the project improves the performance of intersection traffic control systems. With data collected at other intersections, evolutionary methods such as genetic algorithms can likewise reduce the waiting time behind red lights and determine the appropriate cycle.
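The coding, crossover, mutation and selection steps described above can be sketched as a minimal binary-coded GA. The objective below (waiting time modeled as distance from an assumed ideal green time) and all parameter values are illustrative stand-ins, not the paper's data or settings.

```python
import random

def run_ga(fitness, bits=6, pop_size=20, generations=40,
           crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Minimal binary-coded genetic algorithm (tournament selection,
    one-point crossover, bit-flip mutation, elitism) minimizing
    `fitness` over integers encoded in `bits` bits."""
    rng = random.Random(seed)
    decode = lambda c: int("".join(map(str, c)), 2)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    best = min(pop, key=lambda c: fitness(decode(c)))
    for _ in range(generations):
        nxt = [best[:]]                             # elitism: keep the best
        while len(nxt) < pop_size:
            # tournament selection of two parents
            p1 = min(rng.sample(pop, 3), key=lambda c: fitness(decode(c)))
            p2 = min(rng.sample(pop, 3), key=lambda c: fitness(decode(c)))
            c1, c2 = p1[:], p2[:]
            if rng.random() < crossover_rate:       # one-point crossover
                cut = rng.randrange(1, bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):                  # bit-flip mutation
                for i in range(bits):
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
        best = min(pop + [best], key=lambda c: fitness(decode(c)))
    return decode(best), fitness(decode(best))

# Hypothetical objective: waiting time as distance from an assumed
# ideal green-time split of 37 seconds.
wait = lambda g: abs(g - 37)
```

In the real system the fitness would instead come from the collected waiting-time statistics or the CPN Tools simulation of the intersection.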
Enhancement and Segmentation of Historical Records (csandit)
Document Analysis and Recognition (DAR) aims to extract automatically the information in a document, and also to aid human comprehension. The automatic processing of degraded historical documents is an application of the document image analysis field that is confronted with many difficulties due to storage conditions and the complexity of the scripts. The main interest of enhancing historical documents is to remove undesirable artifacts that appear in the background and to highlight the foreground, so as to enable automatic recognition of the documents with high accuracy. This paper addresses pre-processing and segmentation of ancient scripts, as an initial step to automate the task of an epigraphist in reading and deciphering inscriptions. Pre-processing involves enhancement of degraded ancient document images, achieved through four spatial filtering methods for smoothing or sharpening, namely Median, Gaussian blur, Mean and Bilateral filters, with different mask sizes. This is followed by binarization of the enhanced image to highlight the foreground information, using the Otsu thresholding algorithm. In the second phase, segmentation is carried out using the Drop Fall and Water Reservoir approaches to obtain sampled characters, which can be used in later stages of OCR. The system showed good results when tested on nearly 150 samples of variously degraded epigraphic images, giving better enhanced output with a 4x4 mask for the Median filter, a 2x2 mask for Gaussian blur, and 4x4 masks for the Mean and Bilateral filters. The system can effectively sample characters from the enhanced images, giving segmentation rates of 85%-90% for both the Drop Fall and Water Reservoir techniques.
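The Otsu thresholding step used above for binarization can be sketched as follows; this is the standard histogram-based formulation, not the paper's specific implementation.

```python
def otsu_threshold(histogram):
    """Otsu's method: choose the threshold maximizing between-class
    variance over a 256-bin grayscale histogram. Pixels with
    intensity <= t form one class, the rest the other."""
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += histogram[t]            # background weight so far
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight
        if w_f == 0:
            break
        sum_b += t * histogram[t]
        mu_b = sum_b / w_b             # background mean
        mu_f = (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal histogram the method lands between the two modes, which is why it works well after the smoothing filters have suppressed background artifacts.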
Object extraction using edge, motion and saliency information from videos (eSAT Journals)
Abstract: Object detection is the process of finding instances of objects of a certain class, which is useful in the analysis of video or images. A number of algorithms have been developed for object detection, which plays a significant role in many areas of computer vision such as video surveillance and image retrieval. This paper presents an efficient algorithm for moving object extraction using edge, motion and saliency information from videos. Our methodology includes four stages: frame generation, pre-processing, foreground generation and integration of cues. Foreground generation includes edge detection using the Sobel edge detection algorithm, motion detection using a pixel-based absolute difference algorithm, and motion saliency detection. A Conditional Random Field (CRF) is applied for the integration of cues, yielding better spatial information about the segmented object. Keywords: Object detection, Saliency information, Sobel edge detection, CRF.
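The pixel-based absolute difference step of the foreground generation stage might look like the sketch below; the threshold value is an assumption, as the paper does not specify one.

```python
def motion_energy(prev_frame, frame, threshold=20):
    """Pixel-based absolute difference between consecutive grayscale
    frames: returns the binary foreground mask and its total motion
    energy (count of changed pixels)."""
    mask = [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(prev_frame, frame)]
    energy = sum(map(sum, mask))
    return mask, energy
```

The resulting mask is one of the cues (alongside edges and saliency) that a CRF could then fuse into a spatially coherent segmentation.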
Anomaly Detection using multidimensional reduction Principal Component Analysis (IOSR Journals)
Anomaly detection has been an important research topic in data mining and machine learning. Many real-world applications, such as intrusion or credit card fraud detection, require an effective and efficient framework to identify deviated data instances. However, most anomaly detection methods are typically implemented in batch mode, and thus cannot be easily extended to large-scale problems without sacrificing computation and memory requirements. In this paper, we propose the multidimensional reduction principal component analysis (MdrPCA) algorithm to address this problem, and we aim at detecting the presence of outliers in a large amount of data via an online updating technique. Unlike prior principal component analysis (PCA)-based approaches, we do not store the entire data matrix or covariance matrix, so our approach is especially of interest in online or large-scale problems. By applying multidimensional reduction PCA to the target instance and extracting the principal direction of the data, the proposed MdrPCA allows us to determine the anomaly of the target instance according to the variation of the resulting dominant eigenvector. Since MdrPCA need not perform eigen analysis explicitly, the proposed framework is favored for online applications with computation or memory limitations. Compared with the well-known power method for
PCA and other popular anomaly detection algorithms
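The "well-known power method" that the abstract compares against can be sketched as plain power iteration on a covariance matrix; this is the textbook algorithm, not MdrPCA itself.

```python
def power_method(matrix, iterations=100):
    """Power iteration: dominant eigenvector of a square matrix
    (e.g. a covariance matrix, giving the first principal
    direction for PCA)."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iterations):
        # Multiply, then normalize to unit length.
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

An anomaly score in the spirit of the abstract would then compare the dominant eigenvector computed with and without the target instance: a large change in direction suggests an outlier.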
TECHNICAL REVIEW: PERFORMANCE OF EXISTING IMPUTATION METHODS FOR MISSING DATA... (IJDKP)
Incomplete data are present in many studies. Such incomplete or uncollected information is called missing data (values) and is considered a vital problem by various researchers. The missing data problem is especially common at air pollution monitoring stations, where data are collected from multiple monitoring stations spread across various locations. Various imputation methods for missing data have been proposed in the literature; in this research we consider only existing imputation methods and record their performance in ensemble creation. The five existing imputation methods deployed in this research are the series mean method, mean of nearby points, median of nearby points, linear trend at a point, and linear interpolation. The series mean (SM) method performed comparatively better than the other imputation methods, with the least mean absolute error and better accuracy for SVM ensemble creation on a CO data set using bagging and boosting algorithms.
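Two of the five methods listed above, series mean and linear interpolation, can be sketched as follows; missing values are represented as None, an assumption of this illustration.

```python
def series_mean_impute(series):
    """Replace missing values (None) with the mean of all observed
    values in the series."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]

def linear_interpolate_impute(series):
    """Replace interior runs of None by linear interpolation between
    the nearest observed neighbours (leading/trailing gaps are left
    to the caller)."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                          # find end of the gap
            if i > 0 and j < len(out):          # interior gap only
                left, right = out[i - 1], out[j]
                for k in range(i, j):
                    out[k] = left + (right - left) * (k - i + 1) / (j - i + 1)
            i = j
        else:
            i += 1
    return out
```

The imputed series can then feed the bagging/boosting SVM ensembles whose accuracy the review compares across the five methods.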
Identifying Fixations and Saccades in Eye-Tracking Protocols (Giuseppe Fineschi)
The process of fixation identification, separating and labeling fixations and saccades in eye-tracking protocols, is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies them in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.
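One commonly employed technique in this space, dispersion-threshold identification (I-DT), can be sketched as below; the exact windowing rules vary between descriptions, so this is one plausible variant rather than the paper's reference implementation.

```python
def idt_fixations(points, dispersion_threshold, min_points):
    """Dispersion-threshold identification (I-DT): grow a window of
    gaze samples while its dispersion (x-range + y-range) stays
    under the threshold; windows of at least `min_points` samples
    are fixations, the remaining samples belong to saccades."""
    def dispersion(win):
        xs = [p[0] for p in win]
        ys = [p[1] for p in win]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, start = [], 0
    while start < len(points):
        end = start + 1
        while end < len(points) and dispersion(points[start:end + 1]) <= dispersion_threshold:
            end += 1
        if end - start >= min_points:
            win = points[start:end]
            cx = sum(p[0] for p in win) / len(win)
            cy = sum(p[1] for p in win) / len(win)
            fixations.append((cx, cy, start, end))  # centroid + sample range
            start = end
        else:
            start += 1
    return fixations
```

In the paper's taxonomy this algorithm uses spatial information (dispersion) plus a temporal duration criterion, in contrast to purely velocity-based schemes.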
Analysis on different Data mining Techniques and algorithms used in IOT (IJERA Editor)
In this paper, we discuss five functionalities of data mining in the IoT that affect performance: data anomaly detection, data clustering, data classification, feature selection, and time series prediction. Some important algorithms for each functionality are reviewed, showing their advantages and limitations, as well as some new algorithms in current research directions. We thereby present a knowledge view of data mining in the IoT.
HOL, GDCT AND LDCT FOR PEDESTRIAN DETECTION (csandit)
In this paper, we present and analyze different approaches implemented to solve the pedestrian detection problem. Histograms of Oriented Laplacian (HOL) is a feature descriptor that aims to highlight objects in digital images; the Discrete Cosine Transform (DCT), in its global (GDCT) and local (LDCT) versions, converts image pixels into frequency coefficients, which we then use as features in the process. We implemented these methods independently, then combined them and used their outputs in a classifier; the newly generated classifier proved its efficiency in certain cases. The performance of these methods and their combination is tested on the most popular data sets in pedestrian detection, INRIA and Daimler.
A Mixture Model of Hubness and PCA for Detection of Projected Outliers (Zac Darcy)
With the advancement of time and technology, outlier mining methodologies help sift through large amounts of interesting data patterns and winnow out malicious data entering any field of concern. It has become indispensable to build not only a robust and generalised model for anomaly detection but also to equip that model with extra qualities such as utmost accuracy and precision. Although the K-means algorithm is one of the most popular and simplest unsupervised clustering algorithms, it can be used to dovetail PCA with hubness and a robust Gaussian mixture model to build a very generalised and robust anomaly detection system. A major loophole of the K-means algorithm is its tendency to settle in local minima, resulting in ambiguous clusters. In this paper, an attempt is made to combine the K-means algorithm with the PCA technique, which results in the formation of more closely centred clusters on which K-means works more accurately. This combination not only gives a great boost to the detection of outliers but also enhances its accuracy and precision.
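A minimal sketch of the K-means step, here on 1-D data such as scores projected onto a first principal component; Lloyd's algorithm with random initialization is assumed, and all names and parameters are illustrative.

```python
import random

def kmeans_1d(data, k, iterations=50, seed=0):
    """Lloyd's k-means on 1-D data (e.g. projections onto the first
    principal component); returns the final cluster centres, sorted."""
    rng = random.Random(seed)
    centres = rng.sample(data, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x in data:                       # assignment step
            nearest = min(range(k), key=lambda c: abs(x - centres[c]))
            clusters[nearest].append(x)
        for c in range(k):                   # update step
            if clusters[c]:
                centres[c] = sum(clusters[c]) / len(clusters[c])
    return sorted(centres)
```

Running K-means in the PCA-reduced space, as the abstract proposes, tightens the clusters; points far from every final centre are then natural outlier candidates.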
Online video-based abnormal detection using highly motion techniques and stat... (TELKOMNIKA JOURNAL)
At the essence of video surveillance are abnormal detection approaches, which have proven substantially effective in detecting abnormal incidents without prior knowledge about them. The state-of-the-art research makes it evident that there is a trade-off between frame processing time and detection accuracy in abnormal detection approaches. The primary challenge is therefore to balance this trade-off suitably by utilizing few, but very descriptive, features to achieve online performance while maintaining a high accuracy rate. In this study, we propose a new framework that balances detection accuracy and video processing time by employing two efficient motion techniques, specifically foreground and optical flow energy. Moreover, we use different statistical analysis measures of the motion features to obtain a robust inference method that distinguishes abnormal behavior incidents from normal ones. The performance of this framework has been extensively evaluated in terms of detection accuracy, area under the curve (AUC) and frame processing time. Simulation results and comparisons with ten relevant online and non-online frameworks demonstrate that our framework efficiently achieves superior performance, attaining high accuracy values simultaneously with low processing times.
Securing Cloud Computing Through IT Governance (ITIIIndustries)
Lack of alignment between information technology (IT) and the business is a problem facing many organizations. Most organizations, today, fundamentally depend on IT. When IT and the business are aligned in an organization, IT delivers what the business needs and the business is able to deliver what the market needs. IT has become a strategic function for most organizations, and it is imperative that IT and business are aligned. IT governance is one of the most powerful ways to achieve IT to business alignment. Furthermore, as the use of cloud computing for delivering IT functions becomes pervasive, organizations using cloud computing must effectively apply IT governance to it. While cloud computing presents tremendous
opportunities, it comes with risks as well. Information security
is one of the top risks in cloud computing. Thus, IT governance must be applied to cloud computing information security to help manage the risks associated with cloud computing information security. This study advances knowledge by extending IT governance to cloud computing and information security governance.
Information Technology in Industry (ITII) - November Issue 2018 (ITIIIndustries)
IT Industry publishes original research articles, review articles, and extended versions of conference papers. Articles resulting from research of both theoretical and/or practical natures performed by academics and/or industry practitioners are welcome. IT in Industry aims to become a leading IT journal with a high impact factor.
Design of an IT Capstone Subject - Cloud Robotics (ITIIIndustries)
This paper describes the curriculum of the three year IT undergraduate program at La Trobe University, and the faculty requirements in designing a capstone subject, followed by the ACM's recommended IT curriculum covering the five pillars of the IT discipline. Cloud robotics, a broad multidisciplinary research area requiring expertise in all five pillars together with mechatronics, is an ideal candidate to offer capstone experiences to IT students. Therefore, in this paper, we propose a long term master project in developing a cloud robotics testbed, with many capstone sub-projects spanning the five IT pillars, to meet the objectives of capstone experience. This paper also describes the design and implementation of the testbed, and proposes potential capstone projects for students with different interests.
Dimensionality Reduction and Feature Selection Methods for Script Identificat... (ITIIIndustries)
The goal of this research is to explore the effects of dimensionality reduction and feature selection on the problem of script identification from images of printed documents. The k-adjacent segment is ideal for this use due to its ability to capture visual patterns. We have used principal component analysis to reduce our feature matrix to a handier size that can be trained easily, and experimented by including varying combinations of dimensions of the super feature set. A modular neural network approach was used to classify 7 languages: Arabic, Chinese, English, Japanese, Tamil, Thai and Korean.
Image Matting via LLE/iLLE Manifold Learning (ITIIIndustries)
Accurately extracting foreground objects, i.e. isolating the foreground in images and video, is called image matting and has wide applications in digital photography. The problem is severely ill-posed in the sense that, at each pixel, one must estimate the foreground and background colours and the so-called alpha value from pixel information alone. The most recent work in natural image matting relies on local smoothness assumptions about foreground and background colours, on which a cost function is established. In this paper, we propose an extension to the class of affinity-based matting techniques by incorporating local manifold structural information to produce a smoother matte, based on the so-called improved Locally Linear Embedding. We illustrate the new algorithm on the standard benchmark images, obtaining very comparable results.
Annotating Retina Fundus Images for Teaching and Learning Diabetic Retinopath... (ITIIIndustries)
With the improvement of the IT industry, more and more computer software is being introduced in teaching and learning; in this paper, we discuss the development process of such software. Diabetic retinopathy is a common complication in diabetic patients and may cause sight loss if not treated early. The disease has several stages, and fundus imagery is required to identify the stage and severity. Due to the lack of a proper dataset of fundus images with proper annotation, it is very difficult to perform research on this topic. Moreover, medical students often face difficulty identifying the disease later in their practice, as they may not have seen samples of all the stages of diabetic retinopathy. To mitigate the problem, we have collected fundus images from different geographic areas of Bangladesh and designed annotation software that stores information about the patient, the infection level, and its locations in the images. Because it is sometimes difficult to select all the appropriate pixels of an infected region, we have introduced a K-nearest-neighbour (KNN) based technique to accurately select the region of interest (ROI). Once an expert (ophthalmologist) has annotated the images, the software can be used by students for learning.
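The KNN-based ROI selection idea can be sketched as a majority vote over pixel features; the sample colours, labels and k value below are hypothetical illustrations, not the software's actual data.

```python
from collections import Counter

def knn_classify(samples, labels, query, k=3):
    """k-nearest-neighbour vote: label `query` (a feature tuple,
    e.g. an RGB pixel) by the majority label among its k closest
    training samples (squared Euclidean distance)."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(range(len(samples)), key=lambda i: dist(samples[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]
```

In the annotation tool, a few expert-marked pixels per class would serve as the training samples, and every pixel near the marked region would be classified this way to grow an accurate ROI.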
A Framework for Traffic Planning and Forecasting using Micro-Simulation Calib... (ITIIIndustries)
This paper presents the application of micro-simulation to traffic planning and forecasting, and proposes a new framework to model complex traffic conditions by calibrating and adjusting the traffic parameters of a micro-simulation model. Using the open-source micro-simulation package TRANSIMS, animated and numerical results were produced and analysed, and the framework of traffic model calibration was evaluated for its usefulness and practicality. Finally, we discuss future applications such as providing end users with real-time traffic information through Intelligent Transport System (ITS) integration.
Investigating Tertiary Students' Perceptions on Internet Security (ITIIIndustries)
Internet security threats have grown from simple viruses to various forms of computer hacking, scams, impersonation, cyber bullying, and spyware. The Internet has a profound influence on most people, and one can spend endless hours on internet activities. In particular, youth engage in more online activities than any other age group. Excessive internet usage is an emerging threat with negative impacts on these youth; hence it is vital to investigate their online behaviour. This work studies tertiary students' risk awareness and provides findings that help us understand their knowledge of risks and their behaviour in online activities. It reveals several important online issues amongst tertiary students: first, a lack of online security awareness; second, a lack of awareness and information about the dangers of rootkits, internet cookies and spyware; third, female students are more unflinching than male students when commenting on social networking sites; fourth, students are cautious only when obvious security warnings are present; and finally, students commonly use internet hotspots without fully understanding the associated dangers. These findings enable us to recommend internet security habits and safety practices that students should adopt when engaging in online activities. A more holistic approach was considered, aiming to minimize future risks and dangers to students in online activities.
Blind Image Watermarking Based on Chaotic Maps (ITIIIndustries)
Security of a watermark refers to its resistance to unauthorized detection and decoding, while watermark robustness refers to its resistance against common processing. Many watermarking schemes emphasize robustness more than security. However, a robust watermark is not enough to accomplish protection, because hostile attacks are not limited to common processing and distortions. In this paper, we give consideration to watermark security. To achieve this, we employ chaotic maps because of their extreme sensitivity to initial values: if one fails to provide these values, the watermark will be wrongly extracted. While the chaotic maps provide watermark security, the proposed scheme is also intended to achieve robustness.
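The sensitivity the scheme relies on can be demonstrated with the logistic map, one of the simplest chaotic maps. The abstract does not specify which map the authors use; the parameter r = 3.99, the key values, and the thresholding into a bit stream below are illustrative assumptions.

```python
def logistic_sequence(x0, n, r=3.99):
    """Chaotic binary key stream from the logistic map x <- r*x*(1-x).
    The secret key is the initial value x0; a tiny error in x0 yields a
    completely different stream after a few dozen iterations."""
    bits = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)  # threshold trajectory into bits
    return bits

key_stream = logistic_sequence(0.3141592653, 256)
wrong_stream = logistic_sequence(0.3141592654, 256)  # key wrong by only 1e-10
```

In a blind scheme, such a stream might scramble the embedding positions or XOR the watermark bits; without the exact initial value, extraction recovers noise.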
Programmatic detection of spatial behaviour in an agent-based model (ITIIIndustries)
The automated detection of aspects of spatial behaviour in an agent-based model is necessary for model testing and analysis. In this paper we compare four predictors of herding behaviour in a model of a grazing herbivore. We find that (a) the mean number of neighbours, adjusted to account for population variation, and (b) the mean Hamming distance between rows of the two-dimensional environment can be used to detect herding. Visual inspection of the model behaviour revealed that herding occurs when herbivore mobility reaches a threshold level. Using this threshold, we identify limits for these predictors to use in the program code. These results apply only to one set of parameters and one environment size; future research will involve a wider parameter space.
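Predictor (b) can be sketched directly: compute the mean pairwise Hamming distance between rows of a binary occupancy grid. The toy grids below (both containing four agents) are our own illustration, not the paper's model; they show the statistic shifting between a clustered and a dispersed population of the same size.

```python
import numpy as np

def mean_row_hamming(grid):
    """Mean pairwise Hamming distance between rows of a binary occupancy
    grid (1 = cell occupied by an agent)."""
    g = np.asarray(grid)
    total, pairs = 0, 0
    for i in range(len(g)):
        for j in range(i + 1, len(g)):
            total += int((g[i] != g[j]).sum())  # cells where the two rows differ
            pairs += 1
    return total / pairs

herded = np.zeros((4, 4), int)
herded[1, 1:3] = 1
herded[2, 1:3] = 1                 # four agents in one tight cluster
dispersed = np.eye(4, dtype=int)   # four agents spread across the grid
```

Because rows through a single cluster tend to agree (mostly empty or sharing occupied columns), the clustered grid scores lower here than the dispersed one; a threshold on this statistic can then flag herding in program code.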
Design of an IT Capstone Subject - Cloud Robotics (ITIIIndustries)
This paper describes the curriculum of the three-year IT undergraduate program at La Trobe University and the faculty requirements in designing a capstone subject, followed by the ACM's recommended IT curriculum covering the five pillars of the IT discipline. Cloud robotics, a broad multidisciplinary research area requiring expertise in all five pillars together with mechatronics, is an ideal candidate for offering capstone experiences to IT students. We therefore propose a long-term master project, the development of a cloud robotics testbed, with many capstone sub-projects spanning the five IT pillars, to meet the objectives of a capstone experience. This paper also describes the design and implementation of the testbed and proposes potential capstone projects for students with different interests.
A Smart Fuzzing Approach for Integer Overflow Detection (ITIIIndustries)
Fuzzing is one of the most commonly used methods to detect software vulnerabilities, a major cause of information security incidents. Although it has the advantages of simple design and few false reports, its efficiency is usually poor. In this paper we present a smart fuzzing approach for integer overflow detection and a tool, SwordFuzzer, which implements this approach. Unlike standard fuzzing techniques, which randomly change parts of the input file with no information about its underlying syntactic structure, SwordFuzzer uses online dynamic taint analysis to identify which bytes in the input file are used in security-sensitive operations, and then focuses on mutating those bytes. Thus, the generated inputs are more likely to trigger potential vulnerabilities. We evaluated SwordFuzzer with an example program and a number of real-world applications. The experimental results show that SwordFuzzer can accurately locate the key bytes of the input file and dramatically improve the effectiveness of fuzzing in detecting real-world vulnerabilities.
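The core idea, mutating only the byte offsets flagged by taint analysis while leaving the rest of the file intact so it still parses, can be sketched as follows. The function, the hot-offset list, and the RIFF header example are illustrative; this is not SwordFuzzer's actual mutation engine.

```python
import random

def targeted_mutate(data: bytes, hot_offsets, seed=None):
    """Randomize only the byte positions that (hypothetically) taint
    analysis flagged as flowing into security-sensitive operations;
    all other bytes are preserved so the input keeps its structure."""
    rng = random.Random(seed)      # seeded for reproducible test cases
    out = bytearray(data)
    for off in hot_offsets:
        out[off] = rng.randrange(256)
    return bytes(out)

# Example: fuzz only the 32-bit size field of a RIFF-style header
# (bytes 4..7), where an oversized value could trigger an integer overflow.
original = b"RIFF\x10\x00\x00\x00WAVEfmt "
mutated = targeted_mutate(original, hot_offsets=[4, 5, 6, 7], seed=1)
```

Compared with mutating uniformly at random, concentrating the mutation budget on a handful of security-relevant bytes keeps the file well-formed enough to reach the vulnerable code.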
The banking experience for many people today is fundamentally an application of technology to be able to carry out their financial tasks. While the need to visit a bank branch remains essential for a number of activities, increasingly the need to support mobile usage is becoming the central focus of many bank strategies. The core banking systems that process financial transactions must remain highly available and able to support large volumes of activity. These systems represent a long term investment for banks and when the need arises to modernize these large systems, the transformation initiative is often very expensive and of high risk. We present in this paper our experiences in bank modernization and transformation, and outline the strategies for rolling out these large programs. As banking institutions embark upon transformation programs to upgrade their banking channels and core banking systems, it is hoped that the insights presented here are useful as a framework to support these initiatives.
Detecting Fraud Using Transaction Frequency Data (ITIIIndustries)
Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. In this paper, we present a fraud detection method which detects irregular frequencies of transaction usage in an Enterprise Resource Planning (ERP) system. We discuss the design, development and empirical evaluation of outlier detection and distance-measuring techniques to detect frequency-based anomalies within an individual user's profile, relative to other similar users. We propose three automated techniques for detecting transaction frequency anomalies within each transaction profile: a univariate method, the Boxplot, which is based on the sample's median, and two multivariate methods which use Euclidean distance. The two multivariate approaches detect potentially fraudulent activities by identifying (1) users for whom the Euclidean distance between their transaction-type sets is above a certain threshold, and (2) users/data points that lie far apart from other users/clusters or form a small cluster, using k-means clustering. The proposed methodology allows an auditor to investigate the transaction frequency anomalies and to adjust parameters such as the outlier threshold and the Euclidean distance threshold to tune the number of alerts. The novelty of the proposed technique lies in its ability to automatically trigger alerts from transaction profiles, based on transaction usage over a period of time. Experiments were conducted using a real dataset obtained from the production client of a large organization that uses SAP R/3 (presently the most predominant ERP system) to run its business. The results of this empirical research demonstrate the effectiveness of the proposed approach.
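The univariate Boxplot detector can be sketched with the standard Tukey fences on a user's transaction frequencies. The quantile interpolation convention and the fence multiplier k = 1.5 below are our assumptions; the paper's exact cut-offs may differ.

```python
def boxplot_outliers(freqs, k=1.5):
    """Flag transaction frequencies outside the Tukey boxplot fences
    [Q1 - k*IQR, Q3 + k*IQR] as candidate fraud alerts."""
    s = sorted(freqs)

    def quantile(q):
        # Linear interpolation between closest ranks.
        pos = q * (len(s) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    return [f for f in freqs if f < lo_fence or f > hi_fence]

# Most users post this transaction type 8-12 times; one posts it 90 times.
alerts = boxplot_outliers([8, 9, 10, 10, 11, 12, 90])
```

Raising or lowering k is exactly the kind of tuning knob the abstract describes for controlling the number of alerts an auditor must review.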
Mapping the Cybernetic Principles of Viable System Model to Enterprise Servic... (ITIIIndustries)
This paper describes the results of a theoretical mapping of the cybernetic principles of the Viable System Model (VSM) to an Enterprise Service Bus (ESB) model, with the aim of identifying the management principles for the integration of services at all levels in the enterprise. This enrichment directly contributes to the viability of service-oriented systems and the justification of Business/IT alignment within the enterprise. The model was found to be suitable for further adaptation in the industrial setting planned within Australian governmental departments.
Using Watershed Transform for Vision-based Two-Hand Occlusion in an Interacti... (ITIIIndustries)
To achieve natural interaction in an augmented reality environment, we have suggested using markerless vision-based two-handed gestures, with an outstretched hand and a pointing hand serving as the virtual-object registration plane and the pointing device respectively. However, two-handed interaction always causes mutual occlusion, which jeopardizes hand gesture recognition. In this paper, we present a solution for two-hand occlusion using the watershed transform. The main idea is to start from a binary two-hand occlusion image, then form a grey-scale image from the distance of each non-object pixel to the nearest object pixel. The watershed algorithm is applied to the negation of the grey-scale image to form watershed lines which separate the two hands. Fingertips are then identified, and each hand is recognized based on its number of fingertips: the outstretched hand is assumed to contain five fingertips and the pointing hand fewer than five. An example of applying our result to hand and virtual-object interaction is shown at the end of the paper.
Speech Feature Extraction and Data Visualisation (ITIIIndustries)
This paper presents a signal processing approach to analyse and identify accent-discriminative features of four groups of English-as-a-second-language (ESL) speakers: Chinese, Indian, Japanese, and Korean. The features used for speech recognition include pitch, stress, formant frequencies, Mel-frequency coefficients, log-frequency coefficients, and the intensity and duration of spoken vowels. The paper presents our study using the Matlab Speech Analysis Toolbox, and highlights how data processing can be automated and results visualised. The proposed algorithm achieved an average success rate of 57.3% in identifying vowels spoken by the four non-native English speaker groups.
Bayesian-Network-Based Algorithm Selection with High Level Representation Fee... (ITIIIndustries)
A real-world intelligent system consists of three basic modules: environment recognition, prediction (or estimation), and behavior planning. To obtain high quality results in these modules, high-speed processing and real-time adaptability on a case-by-case basis are required. In the environment recognition module, many different algorithms and algorithm networks exist with varying performance, so a mechanism that selects the best possible algorithm is required. To solve this problem, we apply an algorithm selection approach to natural image understanding. The selection mechanism is based on machine learning: a bottom-up algorithm selection from real-world image features, and a top-down algorithm selection using information obtained from a high-level symbolic world description and algorithm suitability. The algorithm selection method iterates for each input image until the high-level description can no longer be improved. In this paper we present a method of iterative composition of the high-level description. This step-by-step approach allows us to select the best result for each region of the image by evaluating all the intermediary representations and finally keeping only the best one.
Instance Selection and Optimization of Neural Networks (ITIIIndustries)
Credit scoring is an important tool in financial institutions, used in credit granting decisions. Credit applications are scored by credit scoring models: those with high scores are treated as "good", while those with low scores are regarded as "bad". As data mining techniques develop, automatic credit scoring systems are warmly welcomed for their high efficiency and objective judgments. Many machine learning algorithms have been applied to training credit scoring models, and the artificial neural network (ANN) is one of them with good performance. This paper presents a higher-accuracy credit scoring model based on MLP neural networks trained with the back-propagation algorithm. Our work focuses on enhancing credit scoring models in three aspects: optimizing the data distribution in datasets using a new method called Average Random Choosing; comparing the effects of different numbers of training, validation and test instances; and finding the most suitable number of hidden units. Another contribution of this paper is summarizing the tendency of model scoring accuracy as the number of hidden units increases. The experimental results show that our methods can achieve high credit scoring accuracy with imbalanced datasets. Thus, credit granting decisions can be made by data mining methods using MLP neural networks.
Signature Forgery and the Forger - An Assessment of Influence on Handwritten ... (ITIIIndustries)
Signatures are widely used as a form of personal authentication. Despite their ubiquity, individual signatures are relatively easy to forge, especially when only the static "pictorial" outcome of the signature is considered at verification time. In this study, we explore opinions on signature usage for verification purposes, and how individuals rate a particular third-party signature in terms of ease of forgeability and their own ability to forge it. We examine responses with respect to an individual's experience of the forgeability/complexity of their own signature. Our study shows that past experience does not generally affect perceived signature complexity or the perceived ability of an individual to forge a signature themselves. In assessing forgeability, most subjects cite the overall signature complexity and distinguishing features in reaching this decision. Furthermore, our research indicates that individuals typically vary their signature according to the scenario but generally put little effort into the production of the signature.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies must adapt and embrace new ideas to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I asked myself, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure and operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. These gains will only be realised when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers, along with a test email for review
But there's more: in a second workflow supporting the same use case, you'll see:
- Your campaign sent to target colleagues for approval
- If the "Approve" button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- But if the "Reject" button is pushed, colleagues will be alerted via a Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects' efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you're in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part "Essentials of Automation" series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here's what you'll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We'll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don't miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more "mechanical" approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
- Execution from Test Manager
- Orchestrator execution result
- Defect reporting
- SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP