AN ADAPTIVE PSEUDORANDOM STEGO-CRYPTO TECHNIQUE FOR DATA COMMUNICATION (IJCNC Journal)
The document describes a proposed adaptive pseudorandom stego-crypto technique for data communication. The technique combines stream cipher cryptography with a modified pseudorandom LSB substitution technique. This yields an evenly distributed ciphertext while also enhancing security: brute-force search times increase, and time complexity is reduced by avoiding collisions during random pixel selection. The proposed method uses three parameters that are tuned through experimental analysis to minimize distortion, increase ciphertext scattering, and reduce collisions and time complexity. Results demonstrate that the technique maintains good perceptual quality while improving upon previous methods.
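The modified pseudorandom LSB method itself is not reproduced in the abstract, but the basic LSB substitution it builds on can be sketched in a few lines (a minimal illustration with made-up pixel values, not the paper's adaptive scheme):

```python
def embed_lsb(pixels, bits):
    """Embed a bit sequence into the least significant bits of pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set it to the message bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

# Illustrative cover pixels and message bits
cover = [200, 13, 57, 128, 99, 42]
message = [1, 0, 1, 1, 0, 0]
stego = embed_lsb(cover, message)
```

Each pixel changes by at most 1, which is why plain LSB embedding is visually imperceptible; the paper's contribution lies in *where* bits are placed (pseudorandom, collision-free pixel selection), not in this embedding step.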
Information theory deals with quantifying and transmitting information. It answers questions about data compression and transmission rates. Shannon showed that reliable transmission is possible at any rate below the channel capacity, but not above it, and that a random process has an irreducible complexity, its entropy, below which it cannot be compressed. Information theory relates to fields like computer science, probability theory, and communication theory. It had its beginnings in the early 20th century but was developed further after World War II by scientists like Shannon and Wiener.
The document proposes a new IPv4/IPv6 transition method called MBD-SIIT that uses a multi-homing approach. MBD-SIIT translates between IPv4 and IPv6 headers to allow communication between IPv4 and IPv6 networks. It aims to reduce packet overhead compared to tunneling and avoid the need to upgrade all edge nodes. The performance of MBD-SIIT is evaluated based on end-to-end delay, throughput, and round-trip time and shows improvements over traditional v4-to-v6 communication.
A new RSA public key encryption scheme with chaotic maps (IJECE IAES)
Public key cryptography has received great attention in the field of information exchange over insecure channels. In this paper, we combine Dependent-RSA (DRSA) and chaotic maps (CM) to obtain a new secure cryptosystem, which depends on both integer factorization and the chaotic maps discrete logarithm (CMDL). Under this new system, an attacker has to break through two levels of reverse engineering concurrently in order to recover the original text from the received ciphertext. Thus, this new system is expected to be more sophisticated and more secure than other systems. We prove that our new cryptosystem does not increase the overhead of the encryption or decryption process, as it requires a minimum number of operations in both. We show that this new cryptosystem is more efficient in terms of performance than other encryption systems, which makes it more suitable for nodes with limited computational ability.
Modern-day computer security relies heavily on cryptography to protect the data we have become increasingly reliant on. A central research question in the computer security domain is how to increase the speed of the RSA algorithm. The computing capability of the Graphics Processing Unit (GPU), acting as a co-processor to the CPU, can be leveraged for massive parallelism. This paper presents a novel algorithm for calculating modulo values of large powers of numbers that are otherwise not supported by built-in data types. First, the traditional algorithm is studied. Second, a parallelized RSA algorithm is designed using the CUDA framework. Third, the designed algorithm is realized for both small and large prime numbers. As a result, fundamental problems of the RSA algorithm, such as speed and the use of poor or small primes that have led to significant security holes despite RSA's mathematical soundness, can be alleviated by this algorithm.
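The paper's CUDA kernels are not reproduced in the abstract, but the serial core of RSA, raising a number to a large power modulo n without ever forming the full power, can be sketched with square-and-multiply (a minimal illustration, not the proposed parallel algorithm):

```python
def mod_pow(base, exp, mod):
    """Right-to-left square-and-multiply: computes (base ** exp) % mod.
    Reducing after every step keeps intermediate values below mod ** 2,
    which is what makes large exponents tractable at all."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                     # current exponent bit is set
            result = (result * base) % mod
        base = (base * base) % mod      # square for the next bit
        exp >>= 1
    return result
```

The loop runs once per bit of the exponent, so a 2048-bit RSA exponentiation costs on the order of a few thousand modular multiplications; the GPU work described in the paper parallelizes the underlying big-number arithmetic, not this loop structure.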
This document summarizes a student's semester project on implementing and comparing the NTRU public key cryptosystem to RSA. The project introduces NTRU, describes the student's Java implementation of the algorithm, and compares the speeds of key generation, encryption and decryption between NTRU and RSA. The results show that NTRU has significantly faster key generation and is faster overall, making it advantageous for applications where public key cryptography is used, such as key exchange and digital signatures.
A New Key Agreement Protocol Using BDP and CSP in Non Commutative Groups (Eswar Publications)
The available key agreement schemes based on number theory, elliptic curves, etc. are well known to cryptanalysts, and their security is vulnerable. This vulnerability only increases with modern, efficient computers. There is therefore a need for new key agreement mechanisms with different properties, so that intruders are caught off guard and communication scenarios become stronger than before. In this paper, we propose a key agreement protocol that works in a non-commutative group. We prove that our protocol meets the desired security attributes under the assumption that the Conjugacy Search Problem and the Decomposition Problem are hard in non-commutative groups.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
BLIND SIGNATURE SCHEME BASED ON CHEBYSHEV POLYNOMIALS (IJNSA Journal)
A blind signature scheme is a cryptographic protocol for obtaining a valid signature on a message from a signer such that the signer's view of the protocol cannot be linked to the resulting message-signature pair. This paper presents a blind signature scheme using Chebyshev polynomials. The security of the scheme depends upon the intractability of the integer factorization problem and of discrete logarithms over Chebyshev polynomials.
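The scheme itself is not given in the abstract, but the algebraic property such constructions rely on, that Chebyshev polynomials commute under composition, is easy to check numerically (a minimal sketch using the trigonometric definition over the reals; cryptographic variants typically work with Chebyshev polynomials over finite fields, which this does not show):

```python
import math

def chebyshev(n, x):
    """Evaluate the degree-n Chebyshev polynomial T_n at x in [-1, 1],
    via the identity T_n(cos t) = cos(n t)."""
    return math.cos(n * math.acos(x))

x = 0.3
r, s = 5, 7
# Semigroup property: T_r(T_s(x)) = T_s(T_r(x)) = T_{r*s}(x).
# This commutativity is what lets Chebyshev maps play the role
# that exponentiation plays in classical discrete-log schemes.
a = chebyshev(r, chebyshev(s, x))
b = chebyshev(s, chebyshev(r, x))
c = chebyshev(r * s, x)
```

Given T_s(x), recovering s is the "chaotic maps discrete logarithm" problem the abstract refers to; the values r and s above are arbitrary illustrative degrees.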
Naver learning to rank question answer pairs using HRDE-LTC (NAVER Engineering)
The automatic question answering (QA) task has long been considered a primary objective of artificial intelligence.
Among the QA sub-systems, we focused on the answer-ranking part. In particular, we investigated a novel neural network architecture with an additional data clustering module to improve performance in ranking answer candidates that are longer than a single sentence. This work can be used not only for the QA ranking task, but also to evaluate the relevance of the next utterance given a dialogue generated by a dialogue model.
In this talk, I'll present our research results (NAACL 2018) and their potential use cases (e.g., fake news detection). Finally, I'll conclude by discussing some issues in previous research and introducing recent approaches in academia.
In this paper we study the MOR cryptosystem over Camina groups. We show that using automorphisms of a Camina group, one can build a secure MOR cryptosystem.
Steganography is the art of hiding information in plain sight, and in this tutorial, I'll show you how to use Steghide — a very simple command line tool to do just that. In addition, I'll go over a bit of conceptual background to help you understand what's going on behind the scenes. This is a tool that's simple, configurable, and only takes a few seconds to hide information in many file types. This is the DBATU Lonere university students' project presentation.
This document discusses various methods of data compression. It begins by defining compression as reducing the size of data while retaining its meaning. There are two main types of compression: lossless and lossy. Lossless compression allows for perfect reconstruction of the original data by removing redundant data. Common lossless methods include run-length encoding and Huffman coding. Lossy compression is used for images and video, and results in some loss of information. Popular lossy schemes are JPEG, MPEG, and MP3. The document then proceeds to describe several specific compression algorithms and their encoding and decoding processes.
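As a concrete instance of one of the lossless methods mentioned above, run-length encoding can be sketched in a few lines (a minimal illustration, not tied to any specific algorithm in the document):

```python
def rle_encode(data):
    """Run-length encoding: collapse each run of equal symbols
    into a (symbol, count) pair."""
    runs = []
    for sym in data:
        if runs and runs[-1][0] == sym:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([sym, 1])   # start a new run
    return [(s, c) for s, c in runs]

def rle_decode(runs):
    """Expand the pairs back into the original sequence -- perfect
    reconstruction is the defining property of a lossless code."""
    return "".join(s * c for s, c in runs)
```

For example, `rle_encode("AAAABBBCCD")` yields four pairs instead of ten symbols; the method only pays off when the data actually contains long runs, which is why it suits bitmaps and fax data more than general text.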
(DL reading group) Matching Networks for One Shot Learning (Masahiro Suzuki)
1. Matching Networks is a neural network architecture proposed by DeepMind for one-shot learning.
2. The network learns to classify novel examples by comparing them to a small support set of examples, using an attention mechanism to focus on the most relevant support examples.
3. The network is trained using a meta-learning approach, where it learns to learn from small support sets to classify novel examples from classes not seen during training.
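The matching step in point 2 can be sketched as softmax attention over similarities to the support set (a heavily simplified illustration: the actual model uses learned embedding networks rather than raw feature vectors, and the example data below is made up):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def matching_predict(query, support):
    """Classify a query by attention over a labeled support set:
    softmax over similarities, then sum the attention mass per label."""
    sims = [cosine(query, x) for x, _ in support]
    m = max(sims)
    weights = [math.exp(s - m) for s in sims]   # stable softmax numerators
    z = sum(weights)
    scores = {}
    for (_, label), w in zip(support, weights):
        scores[label] = scores.get(label, 0.0) + w / z
    return max(scores, key=scores.get)

# Hypothetical 2-D embeddings for a tiny one-shot support set
support = [([1.0, 0.0], "cat"), ([0.9, 0.1], "cat"), ([0.0, 1.0], "dog")]
```

Because the prediction is a weighted vote over the support set, adding a new class at test time only requires adding its examples to `support`; no retraining is needed, which is the point of the meta-learning setup in point 3.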
An Image representation using Compressive Sensing and Arithmetic Coding (IJCERT)
The demand for graphics and multimedia communication over the internet is growing day by day. Generally, the coding efficiency achieved by CS measurements is below that of widely used wavelet coding schemes (e.g., JPEG 2000). In existing wavelet-based CS schemes, the DWT is mainly applied for sparse representation, and the correlation of DWT coefficients has not yet been fully exploited. To improve coding efficiency, the statistics of DWT coefficients have been investigated. A novel CS-based image representation scheme is proposed by considering the intra- and inter-similarity among DWT coefficients. Multi-scale DWT is applied first. The low- and high-frequency subbands of the multi-scale DWT are coded separately, because the scaling coefficients capture most of the image energy. At the decoder side, two different recovery algorithms are presented to exploit the correlation of scaling and wavelet coefficients. In essence, the proposed CS-based coding method can be viewed as a hybrid compressed sensing scheme that gives better coding efficiency than other CS-based coding methods.
A Proposal Analytical Model and Simulation of the Attacks in Routing Protocol... (graphhoc)
In this work we propose analytical methods to simulate these attacks and node mobility in MANETs. The model used to simulate malicious-node mobility attacks is based on graph theory, which serves as a tool for analyzing node behavior. The model used to simulate the cooperative Blackhole, Blackmail, Bandwidth Saturation, and Overflow attacks is based on the malicious nodes and the number of hops. We conducted a simulation of the attacks with a C implementation of the proposed mathematical models.
Bridging knowledge graphs to generate scene graphs (Woen Yon Lai)
The original paper link: https://arxiv.org/abs/2001.02314
* Disclaimer: I am not the author of this paper. I merely reviewed it during a reading group discussion.
Application of BPCS steganography to wavelet compressed video (synopsis) (Mumbai Academisc)
This document discusses applying BPCS steganography techniques to embed secret information in wavelet compressed video. It begins with an abstract describing BPCS steganography, which embeds secret data in the bit-planes of an image without deteriorating image quality. It then provides an introduction to steganography and its applications for secure internet communication. The document discusses the design of the steganography technique, including its high embedding capacity of up to 50% of the original image size without increasing file size. It also covers security considerations like the RSA encryption algorithm.
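The BPCS embedding itself is not shown in the summary, but the bit-plane decomposition it operates on can be sketched directly (a minimal illustration with made-up pixel values; BPCS's complexity-based segmentation of each plane is omitted):

```python
def bit_planes(pixels, depth=8):
    """Split 8-bit pixel values into their bit-planes: plane 0 holds
    every pixel's least significant bit, plane 7 the most significant."""
    return [[(p >> k) & 1 for p in pixels] for k in range(depth)]

# Illustrative pixel values: 200 = 0b11001000, 13 = 0b00001101, 57 = 0b00111001
pixels = [200, 13, 57]
planes = bit_planes(pixels)
```

BPCS exploits the fact that the low-order planes of natural images look like noise: noisy regions of a plane can be replaced wholesale with secret data without visibly degrading the image, which is how the technique reaches the high embedding capacities the summary mentions.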
This document summarizes structured prediction and structured large margin estimation approaches. It discusses how structured prediction can model complex, correlated outputs like sequences, trees, and matchings. It presents a min-max formulation that casts structured prediction as a linear program for inference, allowing joint training with large margin methods. This provides tractable learning for problems like conditional random fields, context-free grammars, and associative Markov networks.
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Determining the Optimum Number of Paths for Realization of Multi-path Routing... (TELKOMNIKA JOURNAL)
This document proposes an algorithm for determining the optimal number of paths for multi-path routing in MPLS-TE networks. The algorithm involves:
1. Constructing a network graph and finding the set of shortest paths between nodes using Dijkstra's algorithm.
2. Determining the maximum flow that can be transmitted over each shortest path based on the minimum cut using the Ford-Fulkerson theorem.
3. Considering delay as another criterion and formulating the problem as a multi-criteria optimization to maximize flow while minimizing delay.
4. Defining a utility function to reduce the problem to a single-criterion by representing the quality of criteria on a 0 to 1 scale.
5.
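Step 1 above relies on Dijkstra's shortest-path search; it can be sketched over an adjacency-list graph as follows (the example network and link costs are made up for illustration):

```python
import heapq

def dijkstra(graph, src):
    """Single-source shortest-path distances over non-negative edge
    weights, using a binary heap as the priority queue."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Illustrative network: node -> list of (neighbor, link cost)
net = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
}
```

In the algorithm above this search only produces the candidate path set; the per-path capacity (step 2) and the flow/delay trade-off (steps 3 and 4) are computed separately on top of it.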
In recent years, cooperative communication has been a hot research topic; it is a powerful physical-layer technique to combat fading in wireless relaying scenarios. On the physical-layer side, this paper focuses on providing a better space-time block coding (STBC) scheme and incorporating it in the cooperative relaying nodes to improve system performance. Golden codes have recently been shown to exhibit superior performance in a wireless MIMO (Multiple Input Multiple Output) scenario compared with any other code. However, a serious limitation is their increased decoding complexity. This paper attempts to resolve this challenge through a suitable modification of the golden code such that a less complex sphere decoder can be used without much compromise in error rates. The decoder complexity is analyzed through simulation and proves to be lower than that of the conventional maximum likelihood (ML) decoder. A single-relay cooperative STBC system consisting of source, relay, and destination is considered, with Decode-and-Forward (DF) as the cooperative protocol strategy in the relay node. The proposed modified golden code with the less complex sphere decoder is implemented in the nodes of the cooperative relaying system to achieve better system performance. The simulation results validate the effectiveness of the proposed scheme, which offers better BER performance, lower outage probability, and increased spectral efficiency compared with non-cooperative transmission.
Packet Classification using Support Vector Machines with String Kernels (IJERA Editor)
Since the inception of the internet, many methods have been devised to keep untrusted and malicious packets away from a user's system. Traffic/packet classification can be used as an important tool to detect intrusions in the system. Using machine learning as an efficient statistics-based approach for classifying packets is a novel method in practice today. This paper emphasizes using an advanced string kernel method within a support vector machine to classify packets. A previous paper addresses a similar problem using machine learning [2], but the approaches mentioned there are not up to date and do not account for modern string kernels that are much more efficient. My work extends their research by introducing different approaches to classify encrypted/unencrypted traffic/packets.
This document discusses parallelizing graph algorithms on GPUs for optimization. It summarizes previous work on parallel Breadth-First Search (BFS), All Pair Shortest Path (APSP), and Traveling Salesman Problem (TSP) algorithms. It then proposes implementing BFS, APSP, and TSP on GPUs using optimization techniques like reducing data transfers between CPU and GPU and modifying the algorithms to maximize GPU computing power and memory usage. The paper claims this will improve performance and speedup over CPU implementations. It focuses on optimizing graph algorithms for parallel GPU processing to accelerate applications involving large graph analysis and optimization problems.
This document describes a Contextualized Knowledge Repository (CKR) framework that allows for representing and reasoning with contextual knowledge on the Semantic Web. The CKR extends the description logic SROIQ-RL to include defeasible axioms in the global context. Defeasible axioms can be overridden by local contexts, allowing exceptions. The CKR is composed of two layers - a global context containing metadata and defeasible axioms, and local contexts containing object knowledge with references. An interpretation of a CKR maps local contexts to descriptions logic interpretations over the object vocabulary, respecting references between contexts.
Branch and-bound nearest neighbor searching over unbalanced trie-structured o... (Michail Argyriou)
Master's presentation by Mike Argyriou at the Technical University of Crete on branch-and-bound nearest neighbor searching over unbalanced trie-structured overlays.
This document summarizes a research paper on topologies in unstructured peer-to-peer networks. The paper proposes a novel overlay formation algorithm that aims to improve search efficiency and effectiveness in unstructured P2P networks. The algorithm exploits peer similarity by connecting each peer to other most similar peers, aiming to satisfy properties of high clustering and low diameter. It also aims for searches to progressively route through peers similar to the destination peer. Simulation results show the algorithm outperforms other approaches in terms of query hop count, successful query ratio, overhead for resolving queries, and overhead of maintaining the network. Future work could investigate the impact of peer heterogeneity on the algorithm.
The document discusses location-based services and applications that utilize positioning systems. It describes several types of location-based services that are emerging, such as navigation assistance, geo-social networking, personalized advertising, and industrial monitoring. It also outlines key components of positioning systems, including network architecture, node interaction methods for measuring location like power profiling and angle of arrival, and sources of errors in indoor positioning.
BLIND SIGNATURE SCHEME BASED ON CHEBYSHEV POLYNOMIALSIJNSA Journal
A blind signature scheme is a cryptographic protocol to obtain a valid signature for a message from a signer such that signer’s view of the protocol can’t be linked to the resulting message signature pair.This paper presents blind signature scheme using Chebyshev polynomials. The security of the given scheme depends upon the intractability of the integer factorization problem and discrete logarithms of Chebyshev polynomials.
Naver learning to rank question answer pairs using hrde-ltcNAVER Engineering
The automatic question answering (QA) task has long been considered a primary objective of artificial intelligence.
Among the QA sub-systems, we focused on answer-ranking part. In particular, we investigated a novel neural network architecture with additional data clustering module to improve the performance in ranking answer candidates which are longer than a single sentence. This work can be used not only for the QA ranking task, but also to evaluate the relevance of next utterance with given dialogue generated from the dialogue model.
In this talk, I'll present our research results (NAACL 2018), and also its potential use cases (i.e. fake news detection). Finally, I'll conclude by introducing some issues on previous research, and by introducing recent approach in academic.
In this paper we study of the MOR cryptosystem using camina group. We show that using the automorphism of the camina group one can build a secure MOR cryptosystem.
Steganography is the art of hiding information in plain sight, and in this tutorial, I'll show you how to use Steghide — a very simple command line tool to do just that. In addition, I'll go over a bit of conceptual background to help you understand what's going on behind the scenes. This is a tool that's simple, configurable, and only takes a few seconds to hide information in many file types. this is the dbatu lonere university students project presentation.
This document discusses various methods of data compression. It begins by defining compression as reducing the size of data while retaining its meaning. There are two main types of compression: lossless and lossy. Lossless compression allows for perfect reconstruction of the original data by removing redundant data. Common lossless methods include run-length encoding and Huffman coding. Lossy compression is used for images and video, and results in some loss of information. Popular lossy schemes are JPEG, MPEG, and MP3. The document then proceeds to describe several specific compression algorithms and their encoding and decoding processes.
(DL輪読)Matching Networks for One Shot LearningMasahiro Suzuki
1. Matching Networks is a neural network architecture proposed by DeepMind for one-shot learning.
2. The network learns to classify novel examples by comparing them to a small support set of examples, using an attention mechanism to focus on the most relevant support examples.
3. The network is trained using a meta-learning approach, where it learns to learn from small support sets to classify novel examples from classes not seen during training.
An Image representation using Compressive Sensing and Arithmetic Coding IJCERT
The demand for graphics and multimedia communication over intenet is growing day by day. Generally the coding efficiency achieved by CS measurements is below the widely used wavelet coding schemes (e.g., JPEG 2000). In the existing wavelet-based CS schemes, DWT is mainly applied for sparse representation and the correlation of DWT coefficients has not been fully exploited yet. To improve the coding efficiency, the statistics of DWT coefficients has been investigated. A novel CS-based image representation scheme has been proposed by considering the intra- and inter-similarity among DWT coefficients. Multi-scale DWT is first applied. The low- and high-frequency subbands of Multi-scale DWT are coded separately due to the fact that scaling coefficients capture most of the image energy. At the decoder side, two different recovery algorithms have been presented to exploit the correlation of scaling and wavelet coefficients well. In essence, the proposed CS-based coding method can be viewed as a hybrid compressed sensing schemes which gives better coding efficiency compared to other CS based coding methods.
A Proposal Analytical Model and Simulation of the Attacks in Routing Protocol...graphhoc
In this work we have devoted to some proposed analytical methods to simulate these attacks, and node mobility in MANET. The model used to simulate the malicious nodes mobility attacks is based on graphical theory, which is a tool for analyzing the behavior of nodes. The model used to simulate the Blackhole cooperative, Blackmail, Bandwidth Saturation and Overflow attacks is based on malicious nodes and the number of hops. We conducted a simulation of the attacks with a C implementation of the proposed mathematical models.
Bridging knowledge graphs_to_generate_scene_graphsWoen Yon Lai
The original paper link: https://arxiv.org/abs/2001.02314
* Disclaimer, I am not the author of this paper. I merely review this paper during a reading group discussion.
Application of bpcs steganography to wavelet compressed video (synopsis)Mumbai Academisc
This document discusses applying BPCS steganography techniques to embed secret information in wavelet compressed video. It begins with an abstract describing BPCS steganography, which embeds secret data in the bit-planes of an image without deteriorating image quality. It then provides an introduction to steganography and its applications for secure internet communication. The document discusses the design of the steganography technique, including its high embedding capacity of up to 50% of the original image size without increasing file size. It also covers security considerations like the RSA encryption algorithm.
This document summarizes structured prediction and structured large margin estimation approaches. It discusses how structured prediction can model complex, correlated outputs like sequences, trees, and matchings. It presents a min-max formulation that casts structured prediction as a linear program for inference, allowing joint training with large margin methods. This provides tractable learning for problems like conditional random fields, context-free grammars, and associative Markov networks.
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
Determining the Optimum Number of Paths for Realization of Multi-path Routing...TELKOMNIKA JOURNAL
This document proposes an algorithm for determining the optimal number of paths for multi-path routing in MPLS-TE networks. The algorithm involves:
1. Constructing a network graph and finding the set of shortest paths between nodes using Dijkstra's algorithm.
2. Determining the maximum flow that can be transmitted over each shortest path based on the minimum cut using the Ford-Fulkerson theorem.
3. Considering delay as another criterion and formulating the problem as a multi-criteria optimization to maximize flow while minimizing delay.
4. Defining a utility function to reduce the problem to a single-criterion by representing the quality of criteria on a 0 to 1 scale.
5.
In recent years, cooperative communication has become a hot research topic: it is a powerful physical-layer technique for combating fading in wireless relaying scenarios. Focusing on physical-layer issues, this paper provides an improved space-time block coding (STBC) scheme and incorporates it in the cooperative relaying nodes to upgrade system performance. Golden codes have recently been shown to outperform all other codes in a wireless MIMO (Multiple Input Multiple Output) scenario; however, a serious limitation is their high decoding complexity. This paper addresses that challenge by modifying the golden code so that a less complex sphere decoder can be used without significantly compromising error rates. The decoder complexity is analyzed through simulation and proves to be lower than that of the conventional maximum-likelihood (ML) decoder. A single-relay cooperative STBC system consisting of source, relay, and destination is considered, with the relay node using the decode-and-forward (DF) protocol. The proposed modified golden code with the less complex sphere decoder is implemented in the nodes of the cooperative relaying system, and the simulation results validate the effectiveness of the proposed scheme, offering better BER performance, lower outage probability, and increased spectral efficiency compared to non-cooperative transmission.
Packet Classification using Support Vector Machines with String KernelsIJERA Editor
Since the inception of the internet, many methods have been devised to keep untrusted and malicious packets away from a user's system. Traffic/packet classification can be used as an important tool to detect intrusion in the system. Using machine learning as an efficient statistics-based approach for classifying packets is a novel method in practice today. This paper emphasizes using an advanced string kernel method within a support vector machine to classify packets.
A related paper applies machine learning to a similar problem [2], but the research it cites is not up to date and does not account for modern string kernels that are much more efficient. My work extends that research by introducing different approaches to classify encrypted and unencrypted traffic/packets.
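To make the idea concrete, here is a minimal, self-contained sketch of a string-kernel classifier. It uses a k-spectrum (k-mer) string kernel, one of the standard string kernels, with a simple nearest-centroid scorer standing in for a trained SVM; the sample payloads and labels are invented for illustration and are not from the paper.

```python
from collections import Counter
import math

def spectrum(s, k=3):
    # k-spectrum feature map: counts of all length-k substrings.
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def kernel(a, b, k=3):
    # Spectrum string kernel: inner product of the k-mer count vectors.
    fa, fb = spectrum(a, k), spectrum(b, k)
    return sum(fa[m] * fb[m] for m in fa if m in fb)

def classify(payload, labeled, k=3):
    # Nearest-centroid stand-in for an SVM: score each class by the
    # mean normalized kernel value against its training payloads.
    def norm_k(a, b):
        d = math.sqrt(kernel(a, a, k) * kernel(b, b, k))
        return kernel(a, b, k) / d if d else 0.0
    scores = {lab: sum(norm_k(payload, p) for p in ps) / len(ps)
              for lab, ps in labeled.items()}
    return max(scores, key=scores.get)
```

In a real pipeline the precomputed kernel matrix would be handed to an SVM solver; the kernel function itself is the part the paper's approach varies.
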
This document discusses parallelizing graph algorithms on GPUs for optimization. It summarizes previous work on parallel Breadth-First Search (BFS), All Pair Shortest Path (APSP), and Traveling Salesman Problem (TSP) algorithms. It then proposes implementing BFS, APSP, and TSP on GPUs using optimization techniques like reducing data transfers between CPU and GPU and modifying the algorithms to maximize GPU computing power and memory usage. The paper claims this will improve performance and speedup over CPU implementations. It focuses on optimizing graph algorithms for parallel GPU processing to accelerate applications involving large graph analysis and optimization problems.
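The BFS variant usually ported to GPUs is level-synchronous: the whole frontier is expanded in each iteration, which maps naturally onto one GPU thread per frontier vertex. A CPU-side sketch of that structure (the dictionary graph encoding is my assumption):

```python
def bfs_levels(adj, src):
    # Level-synchronous (frontier-based) BFS: expand the entire frontier
    # per iteration. On a GPU, the inner loop over frontier vertices is
    # what gets parallelized across threads.
    level = {src: 0}
    frontier = [src]
    depth = 0
    while frontier:
        depth += 1
        nxt = []
        for u in frontier:              # one GPU thread per frontier vertex
            for v in adj.get(u, ()):
                if v not in level:      # first visit assigns the level
                    level[v] = depth
                    nxt.append(v)
        frontier = nxt
    return level
```
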
This document describes a Contextualized Knowledge Repository (CKR) framework that allows for representing and reasoning with contextual knowledge on the Semantic Web. The CKR extends the description logic SROIQ-RL to include defeasible axioms in the global context. Defeasible axioms can be overridden by local contexts, allowing exceptions. The CKR is composed of two layers - a global context containing metadata and defeasible axioms, and local contexts containing object knowledge with references. An interpretation of a CKR maps local contexts to descriptions logic interpretations over the object vocabulary, respecting references between contexts.
Branch-and-bound nearest neighbor searching over unbalanced trie-structured o...Michail Argyriou
Master's presentation by Mike Argyriou at the Technical University of Crete on branch-and-bound nearest neighbor searching over unbalanced trie-structured overlays.
This document summarizes a research paper on topologies in unstructured peer-to-peer networks. The paper proposes a novel overlay formation algorithm that aims to improve search efficiency and effectiveness in unstructured P2P networks. The algorithm exploits peer similarity by connecting each peer to other most similar peers, aiming to satisfy properties of high clustering and low diameter. It also aims for searches to progressively route through peers similar to the destination peer. Simulation results show the algorithm outperforms other approaches in terms of query hop count, successful query ratio, overhead for resolving queries, and overhead of maintaining the network. Future work could investigate the impact of peer heterogeneity on the algorithm.
The document discusses location-based services and applications that utilize positioning systems. It describes several types of location-based services that are emerging, such as navigation assistance, geo-social networking, personalized advertising, and industrial monitoring. It also outlines key components of positioning systems, including network architecture, node interaction methods for measuring location like power profiling and angle of arrival, and sources of errors in indoor positioning.
The document summarizes research on locally densest subgraph discovery. It discusses limitations of prior work that focuses on finding only the single densest subgraph or top-k dense subgraphs through a greedy approach. This may fail to fully characterize the graph's dense regions. The paper proposes defining a locally densest subgraph as one that is maximally ρ-compact, meaning it is connected and removal of nodes removes at least ρ times as many edges, ensuring it is not contained within a better subgraph. This formal definition can better represent different dense regions for applications like community detection.
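The greedy approach the paper criticizes is typically Charikar-style peeling, which returns a single densest subgraph per run. A small sketch (the edge-list encoding is my assumption) makes the limitation visible: only one region is reported, however many dense regions the graph has.

```python
def peel_densest(nodes, edges):
    # Greedy peeling: repeatedly delete the minimum-degree node and
    # remember the intermediate subgraph with the highest average
    # density |E|/|V|. This is the single-subgraph baseline that the
    # locally densest (rho-compact) formulation refines.
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cur, m = set(nodes), len(edges)
    best, best_density = set(cur), m / len(cur)
    while len(cur) > 1:
        u = min(cur, key=lambda x: len(adj[x]))   # minimum-degree node
        m -= len(adj[u])
        for w in adj[u]:
            adj[w].discard(u)
        cur.discard(u)
        density = m / len(cur)
        if density > best_density:
            best, best_density = set(cur), density
    return best, best_density
```
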
This document provides an outline and overview of a presentation titled "Fault Tolerance in Wireless Sensor Networks Using Constrained Delaunay Triangulation". The presentation discusses using Constrained Delaunay Triangulation as a coverage strategy to provide fault tolerance, event reporting, and energy efficiency in wireless sensor networks. It outlines the proposed work, which includes deploying sensors, distributed greedy algorithm for coverage, Constrained Delaunay Triangulation algorithm, and selection of backup nodes. Simulation results are presented comparing the proposed approach to traditional approaches.
Scale-Free Networks to Search in Unstructured Peer-To-Peer NetworksIOSR Journals
This document discusses using scale-free networks to improve search efficiency in unstructured peer-to-peer networks. It proposes the EQUATOR architecture, which creates an overlay network topology based on the scale-free Barabasi-Albert model. Simulation results show that EQUATOR achieves good lookup performance comparable to the ideal Barabasi-Albert network, with low message overhead even under node churn. The scale-free topology allows random walks to efficiently locate resources by directing searches to high-degree "hub" nodes with greater knowledge of the network.
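A Barabasi-Albert graph of the kind EQUATOR's overlay mimics can be grown by preferential attachment: each arriving node links to m existing nodes chosen with probability proportional to degree, which produces the high-degree hubs that random walks exploit. This is a simplified sketch of the model, not EQUATOR's actual construction.

```python
import random

def barabasi_albert(n, m, seed=7):
    # Preferential attachment: the pool list holds one entry per edge
    # endpoint, so sampling from it is degree-proportional sampling.
    rng = random.Random(seed)
    edges = set()
    pool = list(range(m))        # degree-weighted pool of attachment targets
    targets = set(range(m))      # the m seed nodes
    for new in range(m, n):
        for t in targets:
            edges.add((min(new, t), max(new, t)))
            pool.extend((new, t))
        targets = set()
        while len(targets) < m:  # draw m distinct targets for the next node
            targets.add(rng.choice(pool))
    return edges
```
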
Presentation outline:
P2P Basics
Architecture
Lookup in P2P
Related work in P2P Lookup Protocols
Chord Protocol
Cluster based and Routing Balanced P2P Lookup Protocol
PathFinder
LiChord
Proposed P2P Lookup Model based on RCC8 and Scalable Bloom Filter
Future work for proposed P2P lookup model
A DISTRIBUTED ALGORITHM FOR THE DEAD-END PROBLEM IN WSNsPriyanka Jacob
The document outlines a project that aims to provide a solution to the dead-end problem in location-based routing for wireless sensor networks. It proposes an algorithm that can generate loop-free short paths with higher delivery ratios and lower energy consumption to handle large-scale networks. The algorithm uses greedy forwarding and perimeter forwarding to calculate paths and route around voids. It utilizes a shadow spreading function and cost spreading function to establish paths from shadow nodes to the base station.
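Greedy geographic forwarding and its dead-end condition reduce to a single hop decision, sketched below. The function name and coordinate encoding are my assumptions; the perimeter/shadow-spreading recovery the document proposes is only signalled here (by returning None), not implemented.

```python
import math

def greedy_step(pos, node, dest, neighbors):
    # One hop of greedy forwarding: relay to the neighbor closest to the
    # destination, but only if it makes strict progress. Returning None
    # signals a dead end (a routing "void"), where perimeter forwarding
    # or the shadow-spreading phase would take over.
    best = min(neighbors.get(node, []),
               key=lambda v: math.dist(pos[v], pos[dest]),
               default=None)
    if best is None or math.dist(pos[best], pos[dest]) >= math.dist(pos[node], pos[dest]):
        return None  # no neighbor is strictly closer to the destination
    return best
```
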
Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recom...Daniel Valcarce
Slides of the presentation given at ECIR 2016 for the following paper:
Daniel Valcarce, Javier Parapar, Alvaro Barreiro: Efficient Pseudo-Relevance Feedback Methods for Collaborative Filtering Recommendation. ECIR 2016: 602-613
http://dx.doi.org/10.1007/978-3-319-30671-1_44
PR-146: CornerNet detecting objects as paired keypointsjaewon lee
The document summarizes the CornerNet object detection method. CornerNet detects objects as pairs of top-left and bottom-right corners using a convolutional neural network. It introduces corner pooling to better localize corners and achieves state-of-the-art performance among single-stage detectors. The method formulates object detection as an association problem between corners using embeddings and outperforms other detectors on standard benchmarks with an average inference time of 244ms per image.
https://github.com/telecombcn-dl/dlmm-2017-dcu
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
4. Motivation
Limitations of Centralized Systems:
1. Scalability
2. Bottleneck
3. Low fault-tolerance
[Figure: (a) Centralized Systems, with wide-range communication between moving objects and interest objects; (b) P2P Systems, with P2P communication]
3/31/2013
5. Ultimate Aim
“… to harness collaborative power of peers for spatial query processing in Mobile Environment”
6. Problem definition
Bichromatic Reverse Nearest Neighbour (BRNN)
[Figure: a query point q, moving objects, and objects of interest; a circle runs from the object of interest to its nearest moving object; i0 and i1 are the results of the RNN query from q]
7. Related work
Tao, Y., Papadias, D., Lian, X.: Reverse kNN search in arbitrary dimensionality. In: Proceedings of the Thirtieth International Conference on Very Large Data Bases, VLDB '04.
• Limitations: centralized approach; only deals with monochromatic RNNs.
• Propose: P2P, bichromatic.
Half-space pruning: any point that lies in the shaded half-space H-(p0) is always closer to p0 than to q and cannot be the RNN for this reason.
9. Definitions
• Object of interest o
• Boundary region
• If B is closed, B is called the boundary polygon.
• The boundary polygon B is called a tight polygon iff any object of interest oi inside B regards q as the closest moving object.
[Figure: boundary polygon B]
10. How to build a tight polygon
P = {p0, p1, …, p4, p5, p6, …} is a priority queue.
[Figure: circle C(q, qq0); the next processing peer q4 lies outside C; labels mark p4, q0, the TIGHT boundary polygon, the farthest vertex, and the reflection point of q through v0]
11. Construct the polygon for filtering objects of interest
12. Exhaustive Search vs Centralized Search
Remarkably efficient in saving energy and time
14. Optimized Search versus Exhaustive Search
Approximate accuracy rate with less mean latency
15. Simulation framework
- Based on OMNeT++ and MiXiM
- Uses a network interface card that follows the IEEE 802.15.4 low-rate WPAN standard
16. Simulation framework
Simulation model:

  Parameter                    Value
  Playground                   87.1 km²
  No. of MOs                   7600
  No. of IOs                   550
  Cache size                   50
  Expected no. of queries/MO   2
  Simulation time              30 s
19. Simulation Results – No. of Peers Pruned and Stop Hits
[Charts: Optimized Search Algorithm]
20. Conclusion
• P2P Search significantly saves communication cost and reduces processing time by 43% compared to Centralized Search.
• Optimized Search reduces the number of queried peers, and thus the response time, while maintaining an accuracy rate approximating that of Exhaustive Search.
• A practically feasible option for a large-scale and busy network.
23. Problem Statement
• Let P and O be two sets of points in the same data space.
• Given a point p ∈ P, a BRNN query finds all the points o ∈ O whose nearest neighbours in P are p; namely, there does not exist any other point p0 ∈ P such that d(o, p0) < d(o, p).
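The problem statement admits a direct brute-force check, useful as a correctness baseline for the P2P algorithm. A small Python sketch (points as coordinate tuples is my assumption; note that under the strict inequality, a tie counts an object toward both candidates):

```python
import math

def brnn(p, P, O):
    # Brute-force BRNN: o is a result iff no other point p0 in P is
    # strictly closer to o than p is, matching the definition above.
    result = []
    for o in O:
        d_p = math.dist(o, p)
        if all(math.dist(o, p0) >= d_p for p0 in P if p0 != p):
            result.append(o)
    return result
```
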
24. System Overview
[Figure: communication between the Query node and Peers via beacon, acknowledgement, query, and reply messages]
Three phases:
1. Initialization and Peer Discovery
2. Constructing a Boundary Polygon and Sending Queries
3. Pruning Interest Objects
25. Definitions
• q: the query node; p: a peer of q
• P = {p1, …, pH} is a priority queue of peers of q; |P| = H.
• Boundary line (b1)
26. Lemma – How to build a tight polygon
If ∃ pi ∈ priority queue P such that dist(q, pi) ≥ dist(q, vj), then B is a tight polygon.
Put another way, we do not need to consider the remaining peers left in the queue P and can stop creating the polygon.
27. Simulation framework
• Based on OMNeT++
[Screenshot: simulation world showing the Connection Manager, moving objects, and objects of interest]
Editor's Notes
To illustrate what the research is doing, let's imagine a scenario. There is an earthquake; people are disconnected from the centralized BS. A number of rescuers spread out and help injured, immobile victims in the affected area. The only way of communicating is asking their peer rescuers to locate victims; this is called P2P communication. In order to reduce redundancy, optimize human resources, and maximize support, a rescuer would rather go to the victim who needs him most, or in other words, who considers him the closest. This is an example of Bichromatic RNN queries in Mobile P2P Networks. Here the term "bichromatic" means query nodes and points of interest are of two different types: in this example, rescuers are moving objects playing the role of query points or peers, while immobile victims are static points of interest. There are many other potential practical applications of our research. For instance, in everyday applications, the police force can communicate with each other to distribute team members to the locations that need them most, for example car accident sites or congested intersections.
Our research is motivated by two different aspects. First, the advances in mobile technology: recently we have experienced fast evolution in both mobile hardware and software. Your smartphone or tablet has become more powerful than your parents' computer. Google has introduced its Nexus tablet featuring a quad-core processor, while LG and HTC have presented their quad-core phones running Ice Cream Sandwich (Google's Android 4.0) from the outset. It is time to harness the computing power, intelligence, and various functionalities of mobile devices.
Scalability, bottlenecks, and low fault tolerance are critical issues of those centralized approaches, especially in large-scale systems. In particular, those systems contain a single central point of failure, which is likely to be corrupted in several scenarios. For example, on a battlefield or during a natural disaster, the headquarters is vulnerable to unavailability or traffic congestion.
Before going to the proposed algorithms, here are preliminary definitions. q is the query node; p1 is one of the peers of q; b1 is the boundary line, the perpendicular bisector between q and p1. b1 divides the whole plane into two half-planes. H+(p1) is the positive half-plane: any object in this half-plane is closer to q than to p1. Similarly, for the negative half-plane H-(p1), any object in it is closer to p1 than to q.
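The half-plane membership described in this note reduces to a distance comparison: a point lies on q's side of the perpendicular bisector of (q, p1) exactly when it is closer to q. A minimal sketch (the function name is my own):

```python
import math

def in_positive_half_plane(x, q, p1):
    # x lies in H+(p1), the half-plane on q's side of the perpendicular
    # bisector b1 of segment (q, p1), iff x is closer to q than to p1.
    return math.dist(x, q) < math.dist(x, p1)
```
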
An RNN query returns all objects which consider the query object as their nearest neighbour. There are two types: monochromatic RNN and bichromatic RNN.