IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
An Efficient Frame Embedding Using Haar Wavelet Coefficients And Orthogonal C...IJERA Editor
Digital media, applications, copyright defense, and multimedia security have become vital aspects of our daily life. Digital watermarking is a technology used for the copyright protection of digital applications. In this work we deal with a process able to mark digital pictures with visible and semi-visible hidden information, called a watermark. This process may form the basis of a complete copyright protection system. Watermarking is implemented here using Haar wavelet coefficients and Principal Component Analysis. Experimental results show high imperceptibility: there is no noticeable difference between the watermarked video frames and the original frames in the invisible case, and vice versa for the semi-visible implementation. The watermark is embedded in the lower-frequency band of the wavelet-transformed cover image. The combination of the two transform algorithms has been found to improve the performance of the watermarking algorithm. The robustness of the watermarking scheme is analyzed by means of two distinct performance measures, viz. Peak Signal-to-Noise Ratio (PSNR) and Normalized Coefficient (NC).
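The two performance measures named in the abstract above are standard and easy to state precisely. The following is a minimal sketch (an illustration, not the paper's code) of PSNR and a normalized correlation coefficient between an original and a watermarked 8-bit frame, assuming the frames are NumPy arrays:

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def normalized_correlation(original, extracted):
    """Normalized correlation between embedded and extracted watermarks."""
    a = original.astype(np.float64).ravel()
    b = extracted.astype(np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Identical frames give NC = 1 and infinite PSNR; heavier distortion lowers both.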
Robust Watermarking through Dual Band IWT and Chinese Remainder TheoremjournalBEEI
CRT is a widely used algorithm in the development of watermarking methods. It produces good image quality but has low robustness against compression and filtering. This paper proposes a new watermarking scheme based on dual-band IWT to improve robustness while preserving image quality. The high-frequency sub-band is used to index the embedding location in the low-frequency sub-band. In the robustness tests, the CRT method yielded average NC values of 0.7129, 0.4846, and 0.6768, while the proposed method achieved higher NC values of 0.7902, 0.7473, and 0.8163 under the Gaussian filter, JPEG, and JPEG2000 compression tests, respectively. Meanwhile, the CRT and the proposed method had similar average SSIM values of 0.9979 and 0.9960, respectively, in terms of image quality. The results show that the proposed method improves robustness while maintaining image quality.
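CRT-based watermarking relies on the Chinese Remainder Theorem: a pixel value is split into residues modulo pairwise-coprime numbers, one residue is perturbed to encode a bit, and the value is recombined. The moduli and pixel value below are arbitrary illustrative choices, not those of the paper:

```python
def crt_combine(residues, moduli):
    """Recombine residues r_i = x mod m_i into the unique x mod prod(m_i)."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        inv = pow(Mi, -1, m)  # modular inverse; moduli must be pairwise coprime
        x += r * Mi * inv
    return x % M

# Example: split a pixel value over coprime moduli, then recombine it.
moduli = (13, 17)
pixel = 200
residues = [pixel % m for m in moduli]  # [5, 13]
assert crt_combine(residues, moduli) == pixel
```

A watermark bit can then be encoded by nudging one residue before recombination, and recovered by comparing residues at extraction time.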
Investigative Compression Of Lossy Images By Enactment Of Lattice Vector Quan...IJERA Editor
In the digital era we live in, efficient representation of data generated by a discrete source and its reliable transmission are unquestionable needs. In this work we focus on source coding, taking an image as the source. Lattice Vector Quantization (LVQ) can be used for source coding as well as for channel coding. LVQ is implemented both with a Generator Matrix (GM) and with a codebook. For the codebook implementation, two codebooks are constructed: one with the 256 lattice points closest to (0,0,0,0) and another with the 256 lattice points closest to (1,0,0,0). The energy of both codes is calculated. Comparing the energies of the two codes, we find that the code centered at a non-lattice point has lower energy.
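The codebook-energy comparison described above can be reproduced in miniature. The sketch below is an illustration under assumed conventions (the D4 lattice of integer 4-vectors with even coordinate sum, and energy taken as the sum of squared distances to the chosen centre), not the paper's actual construction:

```python
import itertools
import numpy as np

def d4_points(radius=3):
    """All D4 lattice points (integer 4-vectors with even coordinate sum)
    having coordinates in [-radius, radius]."""
    return [np.array(p, dtype=float)
            for p in itertools.product(range(-radius, radius + 1), repeat=4)
            if sum(p) % 2 == 0]

def codebook_energy(center, n=256, radius=3):
    """Energy (sum of squared distances to `center`) of the n lattice
    points closest to `center` -- the codebook built around that point."""
    pts = sorted(d4_points(radius), key=lambda p: float(np.sum((p - center) ** 2)))
    return sum(float(np.sum((p - center) ** 2)) for p in pts[:n])
```

Evaluating `codebook_energy` at (0,0,0,0) and at (1,0,0,0) then gives the two energies to compare.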
Finding Relationships between the Our-NIR Cluster ResultsCSCJournals
The problem of evaluating node importance in clustering has been an active research area in recent years, and many methods have been developed. Most clustering algorithms deal with general similarity measures. In real situations, however, data often changes over time, and clustering this type of data not only decreases the quality of the clusters but also disregards the expectations of users, who usually require recent clustering results. In this regard we proposed the Our-NIR method, which improves on the method proposed by Ming-Syan Chen, as demonstrated by its node-importance results; node importance is very useful in the clustering of categorical data. The method still has deficiencies, namely in data labeling and outlier detection. In this paper we modify the Our-NIR method for evaluating node importance by introducing a probability distribution, and we show the modification to be better by comparing the results.
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPINGijdkp
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increases. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple, more accurate process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training procedure. The proposed process consists of two steps. In the first step, training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster. The Dynamic Time Warping technique is used as the dissimilarity function to cluster similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition, and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs; it can be employed for a large variety of ML methods.
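The clustering step above uses Dynamic Time Warping as the dissimilarity function. A generic textbook DTW implementation (not the authors' code) for two numeric sequences looks like:

```python
def dtw_distance(s, t):
    """Dynamic Time Warping distance between two numeric sequences."""
    n, m = len(s), len(t)
    INF = float("inf")
    # dp[i][j] = cost of the best alignment of s[:i] with t[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]
```

Because DTW allows elastic alignment, a sequence and a time-stretched copy of it (e.g. `[1, 2, 3]` vs `[1, 2, 2, 3]`) have distance zero, which is exactly why it suits clustering instances that differ only in timing.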
K-MEDOIDS CLUSTERING USING PARTITIONING AROUND MEDOIDS FOR PERFORMING FACE R...ijscmc
Face recognition is one of the most unobtrusive biometric techniques and can be used for access control as well as surveillance purposes. Various methods for implementing face recognition have been proposed, with varying degrees of performance in different scenarios. The most common issue with effective facial biometric systems is high susceptibility to variations in the face owing to different factors such as changes in pose, varying illumination, different expressions, presence of outliers, noise, etc. This paper explores a novel technique for face recognition by classifying the face images with an unsupervised learning approach, K-Medoids clustering. The Partitioning Around Medoids (PAM) algorithm has been used for performing K-Medoids clustering of the data. The results suggest increased robustness to noise and outliers in comparison to other clustering methods. The technique can therefore also be used to increase the overall robustness of a face recognition system, increase its invariance, and make it a reliably usable biometric modality.
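A compact sketch of the PAM idea (pick initial medoids, then greedily apply swaps that reduce the total dissimilarity) is shown below. It operates on a precomputed distance matrix and illustrates the clustering step only, not the face-recognition pipeline:

```python
import itertools
import numpy as np

def pam(dist, k, max_iter=100):
    """K-medoids via simple PAM swaps on a precomputed distance matrix."""
    n = dist.shape[0]
    medoids = list(range(k))  # naive initialization: first k points

    def total_cost(meds):
        # Sum over all points of the distance to their nearest medoid.
        return float(dist[:, meds].min(axis=1).sum())

    cost = total_cost(medoids)
    for _ in range(max_iter):
        improved = False
        for i, h in itertools.product(range(k), range(n)):
            if h in medoids:
                continue
            cand = medoids.copy()
            cand[i] = h  # try swapping medoid i for non-medoid h
            c = total_cost(cand)
            if c < cost:
                medoids, cost, improved = cand, c, True
        if not improved:
            break  # local optimum reached
    labels = dist[:, medoids].argmin(axis=1)
    return medoids, labels
```

Because medoids are actual data points (not means), a single far-off outlier shifts the cost only by its own distance term, which is the source of the robustness the abstract mentions.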
The objective of this paper is to present a hybrid approach for edge detection. Under this technique, edge detection is performed in two phases. In the first phase, the Canny algorithm is applied for image smoothing, and in the second phase a neural network detects the actual edges. A neural network is well suited to edge detection, as it is a non-linear network with built-in thresholding capability. A neural network can be trained with the back-propagation technique using a few training patterns, but the most important and difficult part is to identify a correct and proper training set.
Power quality disturbances classification using complex wavelet phasor space ...journalBEEI
Power quality disturbances (PQDs) degrade the quality of power. Detecting these PQDs in real time using smart systems connected to the power grid is a challenge due to the integration of energy-generation units and electronic devices. Deep learning methods have shown advantages for accurate PQD classification. PQD events are non-stationary and occur as discrete events. Pre-processing the power signal with the dual-tree complex wavelet transform (DTCWT), which localizes the disturbances according to time-frequency-phase information, improves classification accuracy, as does phase-space reconstruction (PSR) of the complex wavelet sub-bands into 2D data together with a fully connected feed-forward neural network (FC-FFNN). In this work, a combination of DTCWT-PSR and FC-FFNN is used to classify different complex PQDs accurately. The proposed algorithm is evaluated under different network configurations, and the most suitable structure is developed. The classification accuracy is demonstrated to be 99.71% for complex PQDs, and the method is suitable for real-time operation with reduced complexity.
EXTENDED FAST SEARCH CLUSTERING ALGORITHM: WIDELY DENSITY CLUSTERS,...csandit
CFSFDP (clustering by fast search and find of density peaks) is a recently developed density-based clustering algorithm. Compared to DBSCAN, it needs fewer parameters and is computationally cheap because it is non-iterative. Alex et al. have demonstrated its power in many applications. However, CFSFDP does not perform well when there is more than one density peak for one cluster, a situation we call "no density peaks". In this paper, inspired by the idea of the hierarchical clustering algorithm CHAMELEON, we propose an extension of CFSFDP, E_CFSFDP, to suit more applications. In particular, we use the original CFSFDP to generate initial clusters first, and then merge the sub-clusters in a second phase. We have applied the algorithm to several data sets, some of which have "no density peaks". Experimental results show that our approach outperforms the original one because it relaxes the strict requirements on the data sets.
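CFSFDP's two key quantities are each point's local density ρ and its distance δ to the nearest point of higher density; cluster centres are the points for which both are large. Below is a hedged sketch of that computation, using a simple cutoff kernel and an index-based tie-break (implementation details the abstract does not specify):

```python
import numpy as np

def rho_delta(X, dc):
    """Local density rho (cutoff kernel) and distance-to-higher-density delta."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (d < dc).sum(axis=1) - 1  # neighbours within dc, excluding the point itself
    delta = np.empty(n)
    idx = np.arange(n)
    for i in range(n):
        # Ties in rho are broken by index so exactly one global peak gets max-delta.
        higher = idx[(rho > rho[i]) | ((rho == rho[i]) & (idx < i))]
        delta[i] = d[i].max() if len(higher) == 0 else d[i, higher].min()
    return rho, delta
```

Points with large ρ and large δ are selected as centres; every remaining point is then assigned to the cluster of its nearest higher-density neighbour.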
FAST ALGORITHMS FOR UNSUPERVISED LEARNING IN LARGE DATA SETScsandit
The ability to mine and extract useful information automatically from large datasets has been a common concern for organizations with large datasets over the last few decades. On the internet, data is increasing rapidly, and consequently the capacity to collect and store very large data is increasing significantly. Existing clustering algorithms are not always efficient and accurate in solving clustering problems for large datasets, and the development of accurate and fast data classification algorithms for very large-scale datasets remains a challenge. In this paper, various algorithms and techniques, especially an approach using a non-smooth optimization formulation of the clustering problem, are proposed for solving minimum sum-of-squares clustering problems in very large datasets. This research also develops an accurate and real-time L2-DC algorithm based on the incremental approach to solve the minimum
More than 90% of the businesses in the Davidshall area of Malmö are privately owned. There are no major retail or restaurant chains at the square. All the businesses are completely unique and usually offer goods hand-picked for their customers. This is illustrated by the photo exhibition "Vi är Davidshallare" ("We are Davidshallers") and the launch of the area's logo during the Autumn Festival on 24 September.
The Family Statute (Estatuto da Família), a bill by federal deputy Anderson Ferreira (PR-PE), defines the family as the "union between a man and a woman" and prevents the adoption of children by same-sex couples.
Reduct generation for the incremental data using rough set theorycsandit
In today's changing world, huge amounts of data are generated and transferred frequently. Although data is sometimes static, it is most commonly dynamic and transactional. New data that is generated is constantly added to the old/existing data. To discover knowledge from this incremental data, one approach is to run the algorithm repeatedly on the modified data sets, which is time consuming. This paper proposes a dimension-reduction algorithm that can be applied in a dynamic environment for the generation of a reduced attribute set as a dynamic reduct. The method analyzes the new dataset when it becomes available and modifies the reduct accordingly to fit the entire dataset. The concepts of discernibility relation, attribute dependency, and attribute significance from Rough Set Theory are integrated for the generation of the dynamic reduct set, which not only reduces the complexity but also helps to achieve higher accuracy of the decision system. The proposed method has been applied to a few benchmark datasets collected from the UCI repository and a dynamic reduct computed. Experimental results show the efficiency of the proposed method.
Parallel Implementation of K Means Clustering on CUDAprithan
K-Means clustering is a popular clustering algorithm in data mining. Clustering large data sets can be time consuming, and in an attempt to minimize this time, our project is a parallel implementation of the K-Means clustering algorithm on CUDA using C. We present the performance analysis and implementation of our approach to parallelizing K-Means clustering.
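The per-iteration structure that such a CUDA kernel parallelizes (assign every point to its nearest centroid, then recompute the centroids) can be written serially as follows; this is a generic NumPy sketch, not the project's C/CUDA code:

```python
import numpy as np

def kmeans_step(X, centroids):
    """One Lloyd iteration: assignment step, then centroid update."""
    # Assignment: the per-point nearest-centroid search is what a GPU
    # thread block computes in parallel, one thread per point.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Update: per-cluster mean, a parallel reduction on the GPU.
    new_centroids = np.array([
        X[labels == k].mean(axis=0) if (labels == k).any() else centroids[k]
        for k in range(len(centroids))
    ])
    return labels, new_centroids
```

Iterating `kmeans_step` until the centroids stop moving reproduces the full algorithm; the speedup in the CUDA version comes entirely from parallelizing these two data-parallel steps.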
Increasing electrical grid stability classification performance using ensemb...IJECEIAES
The increasing demand for electricity every year pushes the electricity infrastructure toward its maximum threshold, affecting the stability of the electricity network. The decentralized smart grid control (DSGC) system has succeeded in maintaining the stability of the electricity network under various assumptions. A data-mining approach to the DSGC system shows that the decision tree algorithm provides new knowledge; however, its performance is not yet optimal. This paper proposes an ensemble bagging algorithm to reinforce the performance of the C4.5 decision tree and classification and regression trees (CART). To evaluate classification performance, 10-fold cross-validation was used on the grid data. The results show that the ensemble bagging algorithm succeeded in increasing the performance of both methods, improving accuracy by 5.6% for C4.5 and 5.3% for CART.
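Bagging itself is straightforward: each base model is trained on a bootstrap resample of the training set and predictions are combined by majority vote. The sketch below is generic, with a trivial one-dimensional threshold stump standing in for C4.5/CART; none of it reflects the paper's experimental setup:

```python
import random
from collections import Counter

def bagging_predict(train, x, base_fit, n_models=25, seed=0):
    """Majority vote over base models fit on bootstrap resamples of train."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]  # bootstrap resample
        votes.append(base_fit(sample)(x))
    return Counter(votes).most_common(1)[0][0]

def fit_stump(sample):
    """Toy base learner: 1D threshold stump, majority label on each side."""
    thresh = sum(v for v, _ in sample) / len(sample)
    def side_label(side):
        labels = [y for v, y in sample if (v >= thresh) == side]
        return Counter(labels).most_common(1)[0][0] if labels else 0
    left, right = side_label(False), side_label(True)
    return lambda v: right if v >= thresh else left
```

Averaging over resamples reduces the variance of an unstable base learner, which is the mechanism behind the accuracy gains reported above.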
A novel technique for speech encryption based on k-means clustering and quant...journalBEEI
In information transmission, such as the transmission of speech, high security and confidentiality are especially required. Data encryption is therefore a prerequisite for a secure communication system that protects such information from unauthorized access. A new algorithm for speech encryption is introduced in this paper. It depends on a quantum chaotic map and k-means clustering, which are employed in key generation. Two stages of scrambling are also used: the first operates on bits using the proposed binary representation scrambling (BiRS) algorithm, and the second relies on k-means using the proposed block representation scrambling (BlRS) algorithm. The objective test used statistical analysis measures (signal-to-noise ratio, segmental signal-to-noise ratio, frequency-weighted signal-to-noise ratio, correlation coefficient, and log-likelihood ratio) to evaluate the proposed system. Via MATLAB simulations, it is shown that the proposed technique is secure, reliable, and efficient for implementation in secure speech communication, and is also characterized by high clarity of the recovered speech signal.
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
International Journal of Computational Engineering Research(IJCER) ijceronline
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
FAST DETECTION OF DDOS ATTACKS USING NON-ADAPTIVE GROUP TESTINGIJNSA Journal
Network security plays a more important role today for personal users and organizations alike. Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks are serious problems in networks. The major challenges in the design of an efficient data-stream algorithm are one pass over the input, poly-log space, poly-log update time, and poly-log reporting time. In this paper, we use strongly explicit constructions of d-disjunct matrices in non-adaptive group testing (NAGT) to meet these requirements and propose a solution for fast detection of DoS and DDoS attacks based on the NAGT approach.
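The decoding logic of non-adaptive group testing with a d-disjunct matrix is simple to state: declare an item defective iff every test (row) containing it is positive; d-disjunctness guarantees this naive decoder is exact for up to d defectives. The small matrix below is a toy 1-disjunct example, not a strongly explicit construction:

```python
def nagt_decode(matrix, outcomes):
    """Item j is declared defective iff every test containing j is positive."""
    n = len(matrix[0])
    return [j for j in range(n)
            if all(outcomes[i] for i, row in enumerate(matrix) if row[j])]

# Toy 1-disjunct matrix: 4 tests over 4 items, no column contained in another.
M = [[1, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 0, 1]]
true_defectives = {2}
# A test is positive iff it contains at least one defective item.
outcomes = [any(M[i][j] for j in true_defectives) for i in range(len(M))]
```

In the DDoS setting, "items" would be candidate source addresses and "tests" aggregated counters, so a small number of attackers is identified from far fewer measurements than items.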
Graphical Visualization of MAC Traces for Wireless Ad-hoc Networks Simulated ...idescitation
Many network simulators (e.g., ns-2) are already being used for performing wired and wireless network simulations. However, with the current graphical visualization support built into ns-2, it is difficult to understand the node status, packet status, and MAC-level events, particularly for ad-hoc networks. In this paper, we extend the visualization support in ns-2 to help the research community in the area of wireless networks analyze different MAC-level events in an efficient manner. In particular, we have developed two types of visualization, namely temporal and spatial. Temporal visualization helps to analyze the success or failure of a packet with respect to time, while spatial visualization helps to understand the effects due to the proximity of nodes. The trace is made highly configurable in terms of different attributes such as specific nodes and time duration.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Epistemic Interaction - tuning interfaces to provide information for AI support
T24144148
Monali Patil, Vidya Chitre, Dipti Patil / International Journal of Engineering Research and Applications
(IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 2, Issue 4, July-August 2012, pp. 144-148

Improved K-mean Clustering with Mobile Agent

Monali Patil, Vidya Chitre, Dipti Patil
Department of Information Technology, K. C. COE, Thane
Department of Computer Engineering, Bharati Vidyapidth, Navi Mumbai
Department of Information Technology, PIIT, New Panvel
Mumbai University, Maharashtra, India
Abstract— The data stream model has recently attracted attention for its applicability to numerous types of data, including telephone records, Web documents, and click streams. For the analysis of such data, the ability to process the data in a single pass, or a small number of passes, while using little memory, is crucial. The D-Stream algorithm is an extended grid-based clustering algorithm for data streams of different dimensionality. One of the disadvantages of the D-Stream algorithm is the large number of grids, especially for high-dimensional data. We therefore use mobile agents to handle the different dimensions of the data. This method can reduce the number of grids and the number of update operations, so combining the mobile agent technique with the data stream algorithm makes data stream clustering simpler and more flexible.

Keywords— D-Stream, Grid-based clustering

I. INTRODUCTION

Since Weiser proposed pervasive computing in 1991, it has received growing attention. Pervasive applications emphasize "real-time" reactions because of the special requirements of the environment. With the development of distributed systems and wireless communication, streams of data can be obtained from high-bandwidth sources, and data stream mining has become one of the hot topics in pervasive applications. Clustering can provide real-time analysis results in the pervasive environment. Many mature methods can be extended to cluster data streams. Guha et al. gave an extended k-means clustering method to deal with data streams in 2000. In 2003, Babcock et al. improved Guha's method using the data structure called the exponential histogram. O'Callaghan et al. proposed the STREAM algorithm, which improved k-means using LSEARCH and obtained more qualitative results. The CluStream algorithm was given by Aggarwal et al., which extended the clustering feature concept of the BIRCH algorithm. Clustering is the process of grouping data into classes or clusters so that objects within a cluster have high similarity but are very dissimilar to objects in other clusters.

A good clustering algorithm should be able to identify clusters irrespective of their shapes. D-Stream is a framework for clustering stream data using a density-based approach. The algorithm uses an online component which maps each input data record into a grid, and an offline component which computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream [4]. It exploits the intricate relationships between the decay factor, data density and cluster structure. An extended DBSCAN algorithm called DenStream was proposed by Cao F. et al. in 2006, which also used a two-phase scheme of an online component and an offline component. The D-Stream algorithm is an extended grid-based method proposed by Chen et al. in 2007, which also used the two-phase scheme and emphasized the detection of isolated points.

II. THE CONCEPTS AND DEFINITIONS

Let A = (A1, A2, ..., Ad) be a bounded, totally ordered domain set. We denote the d-dimensional numerical space by S = A1 × A2 × ... × Ad, with the ith dimension of S represented by Ai. A data stream is defined as a set of data points arriving in chronological order, denoted by DS = (X1, X2, ..., Xn, ...). It is assumed that the data stream consists of a set of multi-dimensional records X1, X2, ..., Xn, ... arriving at time stamps t1, t2, ..., tn, ... Each Xi is a multi-dimensional record containing d dimensions, denoted by Xi = (xi1, xi2, ..., xid); Xi represents a data point with xi1 ∈ A1, xi2 ∈ A2, ..., xid ∈ Ad [4].

A. Definition 1: The grid
Each dimension of the numerical space S is divided evenly into ξ intervals; each interval is a right-open interval [li, hi) of that dimension, with length denoted by ln and the intervals numbered i = 1, 2, ..., ξ. The numerical space S is thus divided into ξ^d disjoint grids. Each grid g is a d-dimensional hypercube in S and can be expressed as (v1, v2, ..., vd), where vi is the interval number of g in the ith (1 ≤ i ≤ d) dimension.

B. Definition 2: Data density, attenuation factor
Let the data arriving at time tc be denoted by p. Its density D(p, t) changes with time t and is defined as D(p, t) = λ^(t-tc), where λ (0 < λ < 1) is a constant called the attenuation factor.

Definition 3: Grid density
Suppose the grid g at time t contains the data items p1, p2, ..., pn, whose time stamps (arrival times) are t1, t2, ..., tn respectively. At time t, the density of the grid is defined as the sum of the data densities of all data items in g at time t, denoted by
D(g, t) = Σ_{i=1..n} λ^(t-ti)

Definition 4: Dense grid
At time t, if D(g, t) ≥ C/(N(1-λ)) = Dm, the grid g is called a dense grid, where C ≥ 1 is a constant (in this algorithm C = 3).

Definition 5: Eigenvector (characteristic vector) of a grid
The vector is defined as a six-tuple (Fx, Qx, P, density, class, tg), wherein Fx and Qx each correspond to a vector of d entries, defined as follows. For each dimension, the sum of the squares of the data values is maintained in Qx; thus Qx contains d values, and the p-th entry of Qx is equal to
Σ_{j ∈ grid g} (x_jp)^2
For each dimension, the sum of the data values is maintained in Fx; thus Fx contains d values, and the p-th entry of Fx is equal to
Σ_{j ∈ grid g} x_jp
P is a d-dimensional vector of the grid coordinates, given by the interval numbers of the grid in the corresponding dimensions; density is the density value of the grid at time tg; class is the label of the cluster the grid belongs to; and tg is the time of the most recent data point mapped to the grid.

Definition 6: Density threshold function
Let tc be the current time and tg the time of the last data added to grid g. Then the density threshold function is defined as:
Dmin(tg, tc) = (1 - λ^(tc-tg+1)) / (N(1-λ))
where N is the total number of grids in the space. Its properties are as follows:
(1) If t1 ≤ t2 ≤ t3, then Dmin(t1, t3) = Dmin(t1, t2) λ^(t3-t2) + Dmin(t2+1, t3);
(2) Dmin(t1, t1) = 1/N;
(3) lim_{t→∞} Dmin(t1, t) = 1/(N(1-λ));
(4) If t1 < t2, then Dmin(t1, tc) ≥ Dmin(t2, tc).

III. D-STREAM ALGORITHM

D-Stream is a framework for clustering real-time stream data. There are two components: the online component and the offline component. The online component reads and maps each input data record into a grid, and the offline component computes the density of each grid and clusters the grids using a density-based algorithm at every fixed interval of time (gap). The offline component is essentially the general grid-based clustering algorithm. In the online component, the algorithm adopts a density decaying technique to capture the dynamic changes of a data stream and exploits the intricate relationships between the decay factor, data density and cluster structure.

Once the densities of the grids have been updated in a gap, the clustering procedure is similar to general grid-based clustering. The algorithm can find clusters of arbitrary shape by a systematic visit of the neighbors, and it is not necessary to specify the number of clusters as in the k-means algorithm. However, a drawback is that if the dimensionality is high, the number of grids becomes very large.

For a data stream, at each time step, the online component of D-Stream continuously reads a new data record, places the multi-dimensional data into the corresponding discretized density grid in the multi-dimensional space, and updates the characteristic vector of the density grid (Lines 5-8 in figure 1). The offline component dynamically adjusts the clusters every gap time steps, where gap is an integer parameter. After the first gap, the algorithm generates the initial clusters (Lines 9-11). Then the algorithm periodically removes sporadic grids and regulates the clusters (Lines 12-15).

1. procedure D-Stream
2.   tc = 0;
3.   initialize an empty hash table grid list;
4.   while data stream is active do
5.     read record x = (x1, x2, ..., xd);
6.     determine the density grid g that contains x;
7.     if (g not in grid list) insert g into grid list;
8.     update the characteristic vector of g;
9.     if tc == gap then
10.      call initial clustering(grid list);
11.    end if
12.    if tc mod gap == 0 then
13.      detect and remove sporadic grids from grid list;
14.      call adjust clustering(grid list);
15.    end if
16.    tc = tc + 1;
17.  end while
18. end procedure
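The procedure above can be sketched as an executable loop; the clustering calls are left as stubs, and the decay factor, gap and partition count below are illustrative values rather than those of the paper:

```python
# Executable sketch of the D-Stream procedure; the clustering calls
# are stubs and all parameter values are illustrative.
LAMBDA = 0.998   # decay factor, lambda in (0, 1)
GAP = 4          # offline adjustment period (integer parameter "gap")
PARTITIONS = 10  # partitions per dimension, assuming data in [0, 1)

def grid_of(x):
    """Map a record x = (x1, ..., xd) to its grid g(x) = (j1, ..., jd)."""
    return tuple(min(int(xi * PARTITIONS), PARTITIONS - 1) for xi in x)

def update_characteristic_vector(grid_list, g, tc):
    """Decay the stored density to time tc, then count the new record."""
    density, last = grid_list.get(g, (0.0, tc))
    grid_list[g] = (density * LAMBDA ** (tc - last) + 1.0, tc)

def initial_clustering(grid_list):  # offline component, stub
    pass

def adjust_clustering(grid_list):   # offline component, stub
    pass

def d_stream(stream):
    tc = 0
    grid_list = {}                                       # line 3
    for x in stream:                                     # lines 4-5
        g = grid_of(x)                                   # line 6
        update_characteristic_vector(grid_list, g, tc)   # lines 7-8
        if tc == GAP:                                    # lines 9-11
            initial_clustering(grid_list)
        if tc % GAP == 0:                                # lines 12-15
            adjust_clustering(grid_list)
        tc += 1                                          # line 16
    return grid_list
```

The hash table maps grid coordinates to a (density, last-update-time) pair, which is exactly the lazy update the paper advocates: a grid is touched only when a record maps to it.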
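The grid density of Definition 3 need not be recomputed from every stored record: it can be maintained by decaying the stored value only when a new record arrives. A numerical check of this equivalence (λ and the arrival times are illustrative):

```python
# Check that the incremental rule D(g,tn) = lambda**(tn - tl) * D(g,tl) + 1
# reproduces Definition 3, D(g,t) = sum of lambda**(t - ti) over all records.
LAMBDA = 0.9

def density_from_scratch(timestamps, t):
    """Direct evaluation of Definition 3 at time t."""
    return sum(LAMBDA ** (t - tx) for tx in timestamps)

def density_incremental(timestamps):
    """Update only on arrival, decaying since the previous arrival."""
    density, last = 0.0, timestamps[0]
    for tx in timestamps:
        density = density * LAMBDA ** (tx - last) + 1.0
        last = tx
    return density
```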
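The density threshold function of Definition 6 and the dense-grid test of Definition 4 can be sketched directly from the formulas; λ, N and C below are illustrative values:

```python
# Density threshold Dmin (Definition 6) and dense-grid test (Definition 4).
LAMBDA = 0.998  # decay factor
N = 10_000      # total number of grids in the space
C = 3           # constant of the dense-grid test

def d_min(tg, tc):
    """Dmin(tg, tc) = (1 - lambda**(tc - tg + 1)) / (N * (1 - lambda))."""
    return (1 - LAMBDA ** (tc - tg + 1)) / (N * (1 - LAMBDA))

def is_dense(density):
    """A grid is dense once its density reaches Dm = C / (N * (1 - lambda))."""
    return density >= C / (N * (1 - LAMBDA))
```

Property (2) follows because the (1 - λ) factors cancel when tc = tg, and property (4) holds because an older tg gives a larger exponent and hence a larger threshold.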
The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream. Exploiting the intricate relationships between the decay factor, data density and cluster structure, the algorithm can efficiently and effectively generate and adjust the clusters in real time. We assume that the input data has d dimensions, and each input data record is defined within the space
S = S1 × S2 × ... × Sd,
where Si is the definition space for the ith dimension. In D-Stream, we partition the d-dimensional space S into density grids. Suppose each dimension's space Si, i = 1, ..., d, is divided into pi partitions as
Si = Si,1 ∪ Si,2 ∪ ... ∪ Si,pi;
then the data space S is partitioned into N = Π_{i=1..d} pi density grids. For a density grid g that is composed of S1,j1 × S2,j2 × ... × Sd,jd, with ji = 1, ..., pi, we denote it as g = (j1, j2, ..., jd).

A data record x = (x1, x2, ..., xd) can be mapped to a density grid g(x) as follows:
g(x) = (j1, j2, ..., jd), where xi ∈ Si,ji.

For each data record x, we assign it a density coefficient which decreases as x ages. If x arrives at time tc, we define its time stamp T(x) = tc, and its density coefficient D(x, t) at time t is
D(x, t) = λ^(t-T(x)) = λ^(t-tc),
where λ ∈ (0, 1) is a constant called the decay factor. For a grid g, at a given time t, let E(g, t) be the set of data records that are mapped to g at or before time t. The grid density D(g, t) is defined as the sum of the density coefficients of all data records mapped to g. Namely, the density of g at t is:
D(g, t) = Σ_{x ∈ E(g,t)} D(x, t)

The density of any grid is constantly changing. However, we have found that it is unnecessary to update the density values of all data records and grids at every time step. Instead, it is possible to update the density of a grid only when a new data record is mapped to that grid [3]. For each grid, the time when it received the last data record should be recorded, so that when a new record arrives at the grid at time tn the density can be updated from its value at the last arrival tl as D(g, tn) = λ^(tn-tl) D(g, tl) + 1.

IV. MOBILE AGENT TECHNIQUES

An agent is a program which can act independently and autonomously. A mobile agent is an agent which can move through various networks under its own control, migrating from one host to another and interacting with other agents and resources on each. When a mobile agent's task is done, it can return to its home or accept another assignment. The structure of a mobile agent differs from system to system, but generally speaking there are two parts: the MA (Mobile Agent) and the MAE (Mobile Agent Environment). The MAE realizes the migration of mobile agents among hosts using an agent transfer protocol and provides the executing environment and service interface to them [1]. It also controls security, communication, basic services and so on. The MA sits above the MAE and can move into another MAE. An MA can communicate with other MAs using an agent communication language.

V. K-MEAN ALGORITHM

Input:  D = {t1, t2, t3, ..., tn} // set of elements
        k // number of desired clusters
Output: K // set of clusters
1) Assign initial values for the means m1, m2, m3, ..., mk;
2) Repeat
   a. Assign each item ti to the cluster which has the closest mean;
   b. Calculate the new mean for each cluster;
   until the convergence criterion is met;

VI. THE NEW FRAMEWORK BASED ON MOBILE AGENT

We propose a new framework based on mobile agents to gather and cluster the data. Fig. 1 shows the structure of the framework based on mobile agents. There are three sorts of mobile agents: the main agent, the subagent and the result agent. The main agent is in the online component and gathers the data in every gap. It distributes the data of each dimension to the right subagent. If we have d-dimensional data, we need about d subagents. However, during clustering, if the data has not changed in a certain dimension, the main agent will distribute another dimension's data to that subagent. This method can save space.
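The division of labour just described can be sketched with one worker per dimension standing in for a subagent. Threads and the toy per-dimension "clustering" below are only illustrative stand-ins for mobile agents on a real agent platform; the reuse-on-no-change behaviour follows the text:

```python
# One worker per dimension stands in for a subagent: the main routine
# splits each gap's batch by dimension, re-clusters only dimensions
# whose data changed, and the "result agent" cache collects outputs.
from concurrent.futures import ThreadPoolExecutor

def cluster_dimension(values):
    """Toy stand-in for a subagent's grid clustering of one dimension."""
    return sorted(set(round(v, 1) for v in values))

def main_agent(batch, previous, cache):
    """batch/previous: this gap's and the last gap's d-dimensional records."""
    d = len(batch[0])
    columns = [tuple(rec[i] for rec in batch) for i in range(d)]
    old = ([tuple(rec[i] for rec in previous) for i in range(d)]
           if previous else [None] * d)
    with ThreadPoolExecutor(max_workers=d) as pool:
        futures = {}
        for i in range(d):
            if columns[i] == old[i] and i in cache:
                continue              # no change: reuse the last result
            futures[i] = pool.submit(cluster_dimension, columns[i])
        for i, fut in futures.items():
            cache[i] = fut.result()   # result agent collects the outputs
    return [cache[i] for i in range(d)]
```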
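Separately, the k-mean procedure of Section V can be sketched for one-dimensional data (a minimal illustration; the initial means are arbitrary):

```python
# Minimal k-mean sketch following the Section V pseudocode: assign
# each item to the closest mean, recompute means, repeat until the
# assignment stops changing. One-dimensional data for brevity.
def k_mean(data, means):
    assignment = None
    while True:
        # (a) assign each item to the cluster with the closest mean
        new_assignment = [min(range(len(means)),
                              key=lambda j: abs(t - means[j]))
                          for t in data]
        if new_assignment == assignment:   # convergence criterion
            return means, assignment
        assignment = new_assignment
        # (b) recalculate the mean of each cluster
        for j in range(len(means)):
            members = [t for t, a in zip(data, assignment) if a == j]
            if members:
                means[j] = sum(members) / len(members)
```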
Fig. 1 The structure of the data stream clustering based on mobile agents

Every subagent contains a clustering algorithm model. When a subagent receives data from the main agent, it detects whether the data has changed. If there is no change in this dimension, it sends a message to the result agent to use the previous result, and then sends a message to the main agent, which will distribute another dimension's data to it. If there is a change in this dimension in this gap, the subagent creates new grids to cluster the data. This method avoids creating a large number of grids every time. All subagents execute their clustering tasks in parallel.

VII. PROCESS OF THE CLUSTERING ALGORITHM

The initial clustering part is not changed; we improve the clustering part and add the programming of the mobile agents, including the main agent, the subagent and the result agent. The clustering algorithm is described as follows:
Step 1: The main agent gets the data in a gap time.
Step 2: The main agent distributes the data to the subagents. Each subagent gets the data of one dimension. If the data is d-dimensional, we need d subagents.
Step 3: The subagent inspects the data. If there is no change compared with the last gap, go to Step 5; otherwise go to Step 4.
Step 4: The subagent creates new grids for the new data.
Step 5: The subagent sends the result to the result agent and sends a message to the main agent to request a new task.
Step 6: All the results of the subagents are collected in the result agent. The result agent combines these results and obtains the final result. Go to Step 1.

VIII. RESULTS

For statistical analysis we have considered different input files for the k-mean and mobile agent algorithms. For the k-mean algorithm the input datasets have only 2 attributes, while for the mobile agent project we have considered datasets with up to 28 attributes. For handling dynamic input entry, an EXT folder is created in which different files store the datasets, and those files are renamed according to the time interval.

Fig. 2 Statistical analysis of k-mean and mobile agent

Fig. 3 Graphical results of k-mean and mobile agent

IX. CONCLUSION

The data stream model is relevant to new classes of applications involving massive data sets, such as web click stream analysis and multimedia data analysis. To handle such massive input data, in this paper we suggested the density stream (D-Stream) algorithm with the mobile agent technique, with clustering carried out by k-mean. We improve the D-Stream algorithm using mobile agents in the form of a main agent, subagents and a result agent, so that the computation to cluster data in the offline and online components becomes parallel and fast.
REFERENCES
[1] Zhang Hongyan, Liu Xiyu, "A Data Streams Clustering Algorithm Using DNA Computing Techniques Based on Mobile Agent", IEEE, 2000, Science and Technology Project of Shandong Education Bureau.
[2] Sudipto Guha, Nina Mishra, Rajeev Motwani, Liadan O'Callaghan, "Clustering Data Streams", Department of Computer Science, Stanford University, 2000.
[3] Yixin Chen, Li Tu, "Density-Based Clustering for Real-Time Stream Data", Washington University in St. Louis, St. Louis, USA.
[4] Mark Weiser, "The Computer for the Twenty-first Century", Scientific American, 1991, 265(3): 94-100.
[5] Luo Ke, Wang Lin, "Data Streams Clustering Algorithm Based on Grid and Particle Swarm Optimization", 2009 International Forum on Computer Science-Technology and Applications.