Bioinformatics may be defined as the field of science in which biology,
computer science, and information technology merge to form a single
discipline. Its ultimate goal is to enable the discovery of new biological
insights and to create a global perspective from which unifying principles
in biology can be discerned, by means of bioinformatics tools for storing,
retrieving, organizing, and analyzing biological data. Most of these tools,
however, have very distinct features and capabilities, which makes direct
comparison difficult. In this paper we propose a taxonomy for characterizing
bioinformatics tools and briefly survey the major tools in each category.
We hope this study will help other designers and experienced end users
understand the details of particular tool categories and tools, enabling
them to make the best choices for their particular research interests.
Prediction of Answer Keywords using Char-RNN (IJECEIAES)
Generating sequences of characters using a Recurrent Neural Network (RNN) is a tried and tested method for creating unique and context-aware words, and is fundamental in Natural Language Processing tasks. These types of neural networks can also be used to build a question-answering system. The main drawback of most such systems is that they work from a factoid database of information, so when queried about new and current information, the responses are usually poor. In this paper, the author proposes a novel approach to finding answer keywords from a given body of news text or a headline, based on the query that was asked, where the query concerns current affairs or recent news, using the Gated Recurrent Unit (GRU) variant of RNNs. This ensures that the answers provided are relevant to the content of the query that was put forth.
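The abstract names the GRU variant of RNNs but gives no detail; a minimal sketch of a single GRU cell step follows, with invented dimensions and randomly initialized weights (the paper's actual architecture and sizes are not given here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: the gates decide how much of the previous
    hidden state h to keep versus overwrite with new content."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 8, 16                              # illustrative sizes only
params = [rng.standard_normal((d_h, d_in)) if i % 2 == 0 else
          rng.standard_normal((d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for _ in range(5):                             # five character steps
    h = gru_step(rng.standard_normal(d_in), h, *params)
print(h.shape)  # (16,)
```

In a keyword-prediction setting, the final hidden state would feed a softmax layer over the character or keyword vocabulary.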
Semi-supervised learning approach using modified self-training algorithm to c... (IJECEIAES)
Burst header packet flooding is an attack on optical burst switching (OBS) networks which may cause denial of service. The application of machine learning techniques to detect malicious nodes in OBS networks is relatively new. As finding a sufficient amount of labeled data to perform supervised learning is difficult, a semi-supervised method of learning (SSML) can be leveraged. In this paper, we studied the classical self-training algorithm (ST), which uses the SSML paradigm. Generally, in ST, the available true-labeled data (L) is used to train a base classifier, which then predicts the labels of the unlabeled data (U). A portion of the newly labeled data is removed from U based on prediction confidence and combined with L. The resulting data is then used to re-train the classifier, and this process is repeated until convergence. This paper proposes a modified self-training method (MST): we trained multiple classifiers on L in two stages and leveraged agreement among those classifiers to determine labels. The performance of MST was compared with ST on several datasets and significant improvement was found. We applied MST to a simulated OBS network dataset and found very high accuracy with a small number of labeled data. Finally, we compared this work with some related works.
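The classical ST loop the abstract describes can be sketched directly. The base classifier below is a nearest-centroid stand-in (the paper does not fix one), and the data is synthetic; any classifier that yields confidence scores would slot in the same way:

```python
import numpy as np

def self_train(X_l, y_l, X_u, threshold=0.9, max_rounds=10):
    """Classical self-training (ST): train on L, predict U, move the
    confidently labeled portion of U into L, and repeat."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    for _ in range(max_rounds):
        if len(X_u) == 0:
            break
        # "train": one centroid per class (stand-in base classifier)
        classes = np.unique(y_l)
        centroids = np.array([X_l[y_l == c].mean(axis=0) for c in classes])
        # "predict" U with a softmax-style confidence over distances
        d = np.linalg.norm(X_u[:, None, :] - centroids[None, :, :], axis=2)
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf, pred = p.max(axis=1), classes[p.argmax(axis=1)]
        keep = conf >= threshold
        if not keep.any():
            break
        # move the confident portion of U into L; retrain next round
        X_l = np.vstack([X_l, X_u[keep]])
        y_l = np.concatenate([y_l, pred[keep]])
        X_u = X_u[~keep]
    return X_l, y_l

rng = np.random.default_rng(1)
X_l = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])
y_l = np.array([0] * 5 + [1] * 5)
X_u = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
X_l2, y_l2 = self_train(X_l, y_l, X_u, threshold=0.6)
print(len(y_l2))
```

The paper's MST variant replaces the single base classifier with multiple classifiers trained in two stages, admitting a label only when they agree.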
Traffic analysis is a process of great importance when it comes to securing a network. This analysis
can be performed at different levels, and one of the most interesting is Deep Packet Inspection (DPI). DPI is a
very effective way of monitoring the network, since it inspects traffic across most of the OSI model's layers
(from L3 to L7). Regular Expressions (RegExp), on the other hand, are used in computer science to build a
search pattern from a group of characters. This technique can be combined with a series of matching
algorithms that help quickly locate the search pattern within a text, and even replace it with another value.
In this paper, we aim to show that Regular Expressions are much more productive and effective when used to
create the matching rules needed in DPI. We design and test Regular Expression rules and compare them
against conventional methods. In addition, we present a case study of detecting the EternalBlue and
DoublePulsar threats, in order to demonstrate the practical, real-world value of our proposal.
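A minimal sketch of regex-based payload inspection follows. The rule names and patterns are invented toy signatures for illustration only, not the actual EternalBlue or DoublePulsar detection rules from the paper:

```python
import re

# Illustrative only: toy payload signatures, NOT real threat rules.
# Real DPI rules would match byte sequences from the exploit traffic.
RULES = {
    "toy-smb-probe": re.compile(rb"\xffSMB.{0,64}?TREE_CONNECT", re.DOTALL),
    "toy-beacon":    re.compile(rb"GET /[a-z]{8}\.php\?id=\d+"),
}

def inspect(payload: bytes):
    """Return the names of all rules whose pattern occurs in the payload."""
    return [name for name, rx in RULES.items() if rx.search(payload)]

pkt = b"GET /abcdefgh.php?id=42 HTTP/1.1\r\nHost: x\r\n"
print(inspect(pkt))  # ['toy-beacon']
```

Because the rules are compiled once and applied as byte patterns, new signatures can be added without changing the matching code, which is the maintainability argument for RegExp-based DPI rules.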
Drug Repurposing using Deep Learning on Knowledge Graphs (Databricks)
Discovering new drugs is a lengthy and expensive process, so finding new uses for existing drugs can help create new treatments in less time and at lower cost. The difficulty is in finding these potential new uses.
How do we find these undiscovered uses for existing drugs?
We can unify the available structured and unstructured data sets into a knowledge graph. This is done by fusing the structured data sets, and performing named entity extraction on the unstructured data sets. Once this is done, we can use deep learning techniques to predict latent relationships.
In this talk we will cover:
Building the knowledge graph
Predicting latent relationships
Using the latent relationships to repurpose existing drugs
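The pipeline above can be sketched on a toy triple store. All entity and relation names below are invented, and the simple shared-target heuristic stands in for the talk's deep-learning link predictor, which would score candidate edges with learned embeddings:

```python
from collections import defaultdict

# Toy knowledge graph of (head, relation, tail) triples; names invented.
triples = [
    ("aspirin", "targets", "COX2"),
    ("drugX",   "targets", "COX2"),
    ("aspirin", "treats",  "inflammation"),
    ("drugX",   "treats",  "fever"),
]

neighbors = defaultdict(set)
for h, r, t in triples:
    neighbors[h].add((r, t))

def repurpose_score(drug_a, drug_b):
    """Stand-in for a learned link predictor: drugs that share targets
    are candidates to share indications (a structural heuristic)."""
    shared = neighbors[drug_a] & neighbors[drug_b]
    return len([edge for edge in shared if edge[0] == "targets"])

# drugX shares a target with aspirin, so aspirin's indications become
# candidate new uses (latent "treats" edges) for drugX.
print(repurpose_score("aspirin", "drugX"))  # 1
```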
Full-Text Retrieval in Unstructured P2P Networks using Bloom Cast Efficiently (ijsrd.com)
Efficient and effective full-text retrieval in unstructured peer-to-peer networks remains a challenge in the research community. First, it is difficult, if not impossible, for unstructured P2P systems to effectively locate items with guaranteed recall. Second, existing schemes to improve the search success rate often rely on replicating a large number of item replicas across the wide area network, incurring large communication and storage costs. In this paper, we propose BloomCast, an efficient and effective full-text retrieval scheme for unstructured P2P networks. By leveraging a hybrid P2P protocol, BloomCast replicates items uniformly at random across the P2P network, achieving guaranteed recall at a communication cost of O(N), where N is the size of the network. Furthermore, by casting Bloom Filters instead of the raw documents across the network, BloomCast significantly reduces the communication and storage costs of replication. Results show that BloomCast achieves an average query recall that outperforms the existing WP algorithm by 18 percent, while reducing search latency for query processing by 57 percent.
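The cost saving comes from replicating a compact set-membership summary instead of the documents themselves. A minimal Bloom filter sketch (parameters m and k are illustrative, not BloomCast's tuned values):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: casting these compact bit arrays instead of
    raw documents is what cuts the replication cost described above."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, term):
        # k independent hash positions derived from salted SHA-256
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, term):
        for pos in self._positions(term):
            self.bits[pos] = 1

    def might_contain(self, term):
        # No false negatives; false positives possible.
        return all(self.bits[pos] for pos in self._positions(term))

bf = BloomFilter()
for word in ["peer", "retrieval", "bloom"]:
    bf.add(word)
print(bf.might_contain("bloom"), bf.might_contain("absent"))
```

A node receiving the filter can cheaply test whether a query term might be held by the publishing peer before contacting it, trading a small false-positive rate for a large bandwidth saving.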
Semantic similarity, and semantic relatedness measures in particular, are
very important in the current scenario due to the huge demand for natural
language processing based applications such as chatbots, and information
retrieval systems such as knowledge-base-driven FAQ systems. Current
approaches generally use similarity measures that do not exploit the
context-sensitive relationships between words. This leads to erroneous
similarity predictions and is not of much use in real-life applications.
This work proposes a novel approach that gives an accurate relatedness
measure for any two words in a sentence by taking their context into
consideration. This context correction yields a more accurate similarity
prediction, which in turn improves the accuracy of information retrieval
systems.
P2P DOMAIN CLASSIFICATION USING DECISION TREE (ijp2p)
The increasing interest in Peer-to-Peer systems (such as Gnutella) has inspired many research activities
in this area. Although many demonstrations have shown that the performance of a Peer-to-Peer system
is highly dependent on the underlying network characteristics, much of the evaluation of Peer-to-Peer
proposals has used simplified models that fail to include a detailed model of the underlying network.
This can largely be attributed to the complexity of experimenting with a scalable Peer-to-Peer system
simulator built on top of a scalable network simulator. A major problem of unstructured P2P systems is
their heavy network traffic: a challenging problem in the Peer-to-Peer context is how to find the
appropriate peer to handle a given query without consuming excessive bandwidth. Different methods have
proposed query routing strategies that take the P2P network at hand into account. This paper considers
an unstructured P2P system based on an organization of peers around Super-Peers that are connected to a
Super-Super-Peer according to their semantic domains, and integrates Decision Trees into the P2P
architecture to produce Query-Suitable Super-Peers, each representing a community of peers in which one
peer is able to answer the given query. By analyzing the query log file, a predictive model is
constructed that avoids flooding queries in the P2P network by predicting the appropriate Super-Peer,
and hence the peer to answer the query. A related challenge in a schema-based Peer-to-Peer (P2P) system
is how to locate peers that are relevant to a given query. The proposed architecture, based on
(Super-)Peers, therefore focuses on query routing: it groups together (Super-)Peers that have similar
interests into an efficient query routing method. In such groups, called Super-Super-Peers (SSP),
Super-Peers submit queries that are often processed by members of the group. An SSP is a specific
Super-Peer which holds knowledge about (1) its Super-Peers and (2) the other SSPs. This knowledge is
extracted using data mining techniques (e.g. Decision Tree algorithms) from the queries of peers
transiting the network. The advantage of this distributed knowledge is that it avoids semantic mapping
between the heterogeneous data sources owned by (Super-)Peers each time the system decides to route a
query to other (Super-)Peers. The set of SSPs improves the robustness of the query routing mechanism and
the scalability of the P2P network. Compared with a baseline approach, the proposed architecture shows
the effect of the data mining, with better performance in terms of response time and precision.
A consistent and efficient graphical User Interface Design and Querying Organ... (CSCJournals)
We propose a software layer called GUEDOS-DB on top of an Object-Relational Database Management System (ORDBMS). In this work we apply it to Molecular Biology, more precisely to complete organelle genomes. We aim to offer biologists the possibility of accessing, in a unified way, information spread among heterogeneous genome databanks. The goal of this paper is first to present a visual schema graph through a number of illustrative examples. The human-computer interaction technique adopted for this visual design and querying makes it very easy for biologists to formulate database queries, compared with a linear textual query representation.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Genomic repeats detection using Boyer-Moore algorithm on Apache Spark Streaming (TELKOMNIKA JOURNAL)
Genomic repeat detection, i.e., pattern searching over strings to find
repeated base pairs in a deoxyribonucleic acid (DNA) sequence, requires
a long processing time. This research builds a big-data computational
model for finding patterns in strings by modifying and implementing
the Boyer-Moore algorithm on Apache Spark Streaming for human DNA
sequences from the Ensembl site. Moreover, we perform experiments
on cloud computing, varying the specifications of the computer clusters
and the human DNA sequence datasets involved. The results show
that the proposed computational model on Apache Spark Streaming is faster
than standalone computing and multicore parallel computing. The main
contribution of this research, a computational model that reduces
computational cost, has therefore been achieved.
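The core matcher can be sketched in a few lines. This is the textbook Boyer-Moore with the bad-character rule only; the paper's modification and its Spark Streaming distribution are not reproduced here:

```python
def boyer_moore_search(text, pattern):
    """Boyer-Moore with the bad-character rule: compare the pattern
    right-to-left and, on a mismatch, skip ahead using the rightmost
    position of the mismatched character in the pattern."""
    last = {c: i for i, c in enumerate(pattern)}  # rightmost index per char
    m, n, hits = len(pattern), len(text), []
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[i + j]:
            j -= 1
        if j < 0:
            hits.append(i)          # full match at offset i
            i += 1
        else:
            # shift so the bad character aligns with its rightmost
            # occurrence in the pattern (at least one position)
            i += max(1, j - last.get(text[i + j], -1))
    return hits

dna = "ACGTACGTTACGTT"
print(boyer_moore_search(dna, "ACGTT"))  # [4, 9]
```

In a streaming setting, each micro-batch of sequence chunks would run this search in parallel, with overlaps of length m-1 between chunks so matches spanning a boundary are not missed.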
An Introduction to Bioinformatics
Drexel University INFO648-900-200915
A Presentation of Health Informatics Group 5
Cecilia Vernes
Joel Abueg
Kadodjomon Yeo
Sharon McDowell Hall
Terrence Hughes
Computational Biology and Bioinformatics (Sharif Shuvo)
Computational Biology and Bioinformatics is a rapidly developing multi-disciplinary field. The systematic acquisition of data made possible by genomics and proteomics technologies has created a tremendous gap between available data and their biological interpretation.
A comprehensive presentation on cancer biochemistry, including biochemical changes, carcinogens, the mechanism of chemical carcinogenesis, oncogenes and their activation, monoclonal antibodies for cancer therapy, diet, prevention, and tumor markers.
A description of how technology has changed the face of Biology, especially in the fields of genetics, proteomics, and evolution.
It includes a brief history, examples of usage, and a look into the future.
All new computers have multicore processors. To exploit this hardware parallelism for improved
performance, the predominant approach today is multithreading using shared variables and locks. This
approach has potential data races that can create a nondeterministic program. This paper presents a
promising new approach to parallel programming that is both lock-free and deterministic. The standard
forall primitive for parallel execution of for-loop iterations is extended into a more highly structured
primitive called a Parallel Operation (POP). Each parallel process created by a POP may read shared
variables (or shared collections) freely. Shared collections modified by a POP must be selected from a
special set of predefined Parallel Access Collections (PAC). Each PAC has several Write Modes that
govern parallel updates in a deterministic way. This paper presents an overview of a Prototype Library
that implements this POP-PAC approach for the C++ language, including performance results for two
benchmark parallel programs.
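The prototype library itself targets C++, but the determinism idea behind POP and PAC Write Modes can be sketched in Python. The names below are illustrative, not the library's API: each iteration returns a contribution instead of writing shared state, and a fixed order-independent combine rule plays the role of a Write Mode:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from operator import add

def pop(iterations, body, write_mode=add, identity=0):
    """A 'forall'-style Parallel Operation sketch: iterations run in
    parallel but never mutate shared variables; a commutative and
    associative Write Mode combines their results, so the outcome is
    deterministic regardless of thread scheduling."""
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(body, iterations))
    return reduce(write_mode, parts, identity)

# Sum of squares: the combine rule is commutative/associative, so every
# run yields the same answer, with no locks and no data races.
total = pop(range(1000), lambda i: i * i)
print(total)  # 332833500
```

The key design point mirrors the paper's: races are ruled out by construction, because parallel writes are only allowed through the predefined combining rule rather than through arbitrary shared-variable updates.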
Performance Improvement of BLAST with Use of MSA Techniques to Search Ancesto... (IJRTEMJOURNAL)
BLAST is the most popular sequence alignment tool used to align bioinformatics patterns. It uses a
local alignment process in which, instead of comparing the whole query sequence with the database
sequence, it breaks the query sequence into small words, and these words are used to align patterns. It
uses a heuristic method which makes it faster than the earlier Smith-Waterman algorithm. But because
small query words are used for alignment, it may perform poorly on very large databases with complex
queries. To remove this drawback we suggest using MSA tools, which can filter the database by removing
unnecessary sequences from the data. This sorted data set is then fed to BLAST, which can then identify
the relationships among sequences, i.e. homologs, orthologs, and paralogs. The proposed system can
further be used to find the relation between two persons or to create a family tree. Orthology is
interesting for a wide range of bioinformatics analyses, including functional annotation, phylogenetic
inference, and genome evolution. This system describes and motivates the algorithm for predicting
orthologous relationships among complete genomes. The algorithm takes a pairwise approach, requiring
neither tree reconstruction nor reconciliation.
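The word-based seeding step that the abstract describes can be sketched as follows. This is the textbook BLAST seeding idea (exact word hits that would then be extended into local alignments), not the paper's full pipeline, and the sequences are toy examples:

```python
def seed_words(query, w=3):
    """Break the query into overlapping words of length w, as BLAST
    does, instead of aligning the whole sequence at once."""
    return {query[i:i + w]: i for i in range(len(query) - w + 1)}

def find_seeds(query, subject, w=3):
    """Report (query_pos, subject_pos) pairs where a query word occurs
    exactly in the subject: the seeds BLAST would then extend into
    full local alignments."""
    words = seed_words(query, w)
    hits = []
    for j in range(len(subject) - w + 1):
        word = subject[j:j + w]
        if word in words:
            hits.append((words[word], j))
    return hits

print(find_seeds("GCATC", "TTGCATCAA"))  # [(0, 2), (1, 3), (2, 4)]
```

The MSA-based pre-filtering proposed in the paper would shrink the subject database before this seeding step, so fewer subject windows need to be scanned.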
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
Mapping of genes using cloud technologies (eSAT Journals)
Abstract: Bioinformatics is a very active topic among recent researchers. There are various bioinformatics tasks, such as alignment and comparison of DNA and RNA, gene mapping on chromosomes, protein structure prediction, and gene finding from DNA sequences. Gene mapping is the procedure of calculating the distance between genes on chromosomes. In real applications, medical researchers process huge amounts of data that may come from different clusters or different locations, and the mapping results may differ from each other considerably. In order to process such large amounts of data, we need a platform that will serve without fail. Cloud computing is such a technique: it can be applied to varied and very large data sets to compare, distinguish, or simply map the genes on chromosomes. In the field of cloud computing, Apache Hadoop provides a good platform for processing huge amounts of data, and already powers similar applications at Facebook and Yahoo. It offers properties like fault tolerance, which can be very useful in securing the data. In this paper we discuss the application of cloud technologies to gene mapping on chromosomes, and as a practical platform we discuss how Apache Hadoop can be applied for this purpose. Index Terms: Bioinformatics, Cloud computing, Gene mapping, Protein structure prediction, Apache Hadoop, Chromosome.
Nowadays, the Internet has become an important part of human life: a person can shop, invest, and perform all banking tasks online. Almost all organizations have their own website, where customers can perform tasks such as shopping simply by providing their credit card details. Online banking and e-commerce organizations have been experiencing an increase in credit card transactions and other modes of online transaction. As a result, credit card fraud has become a serious issue for the credit card industry, causing heavy financial losses for customers as well as organizations. Many techniques, such as Decision Trees, Neural Networks, and Genetic Algorithms, based on modern approaches from Artificial Intelligence, Machine Learning, and Fuzzy Logic, have already been developed for credit card fraud detection. In this paper, an evolutionary Simulated Annealing algorithm is used to train Neural Networks for credit card fraud detection in a real-time scenario. The paper shows how this technique can be used for credit card fraud detection and presents detailed experimental results obtained by applying it to real-world financial data (taken from the UCI repository) to demonstrate its effectiveness. The algorithm used in this paper is likely beneficial for organizations and individual users in terms of cost and time efficiency. Still, many cases are misclassified, i.e. a genuine customer is classified as fraudulent or vice versa.
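As a hedged illustration of the training idea only (the toy transaction features, network size, and cooling schedule below are assumptions, not the paper's setup), simulated annealing can train even a single logistic neuron without gradients:

```python
# Sketch: simulated annealing trains a single logistic neuron on made-up,
# linearly separable "transaction" data; not the paper's actual network.
import math, random

random.seed(0)
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]

def loss(w):
    # Mean-squared error of a logistic neuron with weights w[0:2], bias w[2].
    total = 0.0
    for (x1, x2), label in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + w[2])))
        total += (p - label) ** 2
    return total

w, cur, temp = [0.0, 0.0, 0.0], loss([0.0, 0.0, 0.0]), 1.0
for _ in range(2000):
    cand = [wi + random.gauss(0, 0.3) for wi in w]   # random neighbour
    delta = loss(cand) - cur
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        w, cur = cand, cur + delta
    temp *= 0.995                                    # geometric cooling

print("final loss:", round(cur, 3))
```

The high-temperature phase lets the search escape poor regions; as the temperature cools, acceptance of worse moves becomes rare and the search settles into a low-loss weight vector.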
Wireless sensor networks (WSNs) have been widely used in various applications. In these networks, nodes collect data from attached sensors and send it to a base station. However, WSN nodes have a limited power supply in the form of batteries, so they are expected to minimize energy consumption in order to maximize the lifetime of the network. A number of techniques have been proposed in the literature to reduce energy consumption significantly. In this paper, we propose a new clustering-based technique that modifies the popular LEACH algorithm. In this technique, cluster heads are first elected using the improved LEACH algorithm as usual, and then clusters of nodes are formed based on the distance between each node and the cluster heads. Finally, data from each node is transferred to its cluster head. After applying aggregation, a cluster head forwards its data either to another cluster head that lies closer to the sink in the forward direction, or directly to the sink. This reduction in distance travelled improves performance over the LEACH algorithm significantly.
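A minimal sketch of the two steps the abstract describes, probabilistic cluster-head election with the standard LEACH threshold followed by distance-based cluster formation, assuming made-up node coordinates and omitting the per-epoch eligibility bookkeeping of full LEACH:

```python
# Simplified LEACH-style round: elect cluster heads, then each node joins
# the nearest head. Node positions are made up for illustration.
import math, random

random.seed(1)
P = 0.2                                  # desired fraction of cluster heads
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def threshold(P, r):
    # LEACH election threshold T(n) for round r (eligibility set omitted).
    return P / (1 - P * (r % int(1 / P)))

def elect_and_cluster(r):
    heads = [n for n in nodes if random.random() < threshold(P, r)]
    if not heads:                        # degenerate round: force one head
        heads = [random.choice(nodes)]
    clusters = {h: [] for h in heads}
    for n in nodes:                      # join the closest cluster head
        nearest = min(heads, key=lambda c: math.dist(n, c))
        clusters[nearest].append(n)
    return clusters

clusters = elect_and_cluster(r=0)
print("heads:", len(clusters), "members total:", sum(len(m) for m in clusters.values()))
```

The paper's modification concerns how data then travels head-to-head toward the sink; that multi-hop forwarding step is not reproduced here.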
Next-generation wireless networks comprise mobile users moving between heterogeneous networks, using terminals with multiple access interfaces and services. The most important issue in such an environment is ABC (Always Best Connected), i.e. providing the best connectivity to applications anywhere at any time. To meet this requirement, various vertical handover decision-making strategies have been proposed. This paper provides an overview of the most interesting recent strategies.
This paper presents the design and performance comparison of a two-stage operational amplifier topology in CMOS and BiCMOS technology. The conventional op-amp circuit was designed using the RF model of BSIM3V3 in 0.6 μm CMOS technology and in 0.35 μm BiCMOS technology. Both op-amp circuits were designed, simulated, and analyzed, and their performance parameters, such as gain, phase margin, CMRR, PSRR, and power consumption, were compared. Finally, we conclude on the suitability of CMOS technology over BiCMOS technology for low-power RF design.
In Cognitive Radio Networks (CRNs), Cooperative Spectrum Sensing (CSS) is used to improve the performance of the spectrum sensing techniques that detect a licensed (primary) user's signal. In CSS, spectrum sensing information from multiple unlicensed (secondary) users is combined to reach a final decision about the presence of the primary signal. The combining techniques used to generate this final decision are also called fusion techniques or rules, and are further classified into data fusion and decision fusion. In data fusion, all secondary users (SUs) share their raw spectrum-detection information, such as detected energy or other statistics, while in decision fusion each SU takes a local decision and shares it by sending '0' or '1', corresponding to the absence or presence of the PU's signal respectively. The rules used in decision fusion are the OR rule, the AND rule, and the K-out-of-N rule. CSS is further classified into distributed and centralized CSS. In distributed CSS, all SUs share their spectrum-detection information with one another and, by combining the shared information, each SU takes the final decision individually. In centralized CSS, all SUs send their detected information to a secondary base station (central unit), which combines the shared information, takes the final decision, and shares it with all SUs in the CRN. This paper presents an overview of the information fusion methods used for CSS and an analysis of decision fusion rules with simulation results.
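The three decision fusion rules named above reduce to a single counting rule, which can be sketched in a few lines (the five local decisions are made up for illustration):

```python
# Decision-fusion sketch: the fusion centre declares the primary user present
# when at least K of the N local decisions are '1'. OR is K=1, AND is K=N.
def k_out_of_n(decisions, k):
    return 1 if sum(decisions) >= k else 0

local = [1, 0, 1, 1, 0]          # hard decisions from N=5 secondary users
print(k_out_of_n(local, 1))      # OR rule  -> 1
print(k_out_of_n(local, 5))      # AND rule -> 0
print(k_out_of_n(local, 3))      # majority -> 1
```

The OR rule minimises missed detections at the cost of false alarms, the AND rule does the opposite, and intermediate K values trade off between the two.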
ZigBee has been developed to support applications with low data rates and low power consumption. This paper analyzes various parameters of the ZigBee physical layer (PHY). PHY performance is evaluated on the basis of throughput and of energy consumption in transmitting and receiving modes, and the effect of varying the network size on these attributes is studied. Several modulation schemes are also compared, and the best scheme is suggested along with the tradeoffs between different performance metrics.
This paper gives a brief overview of moving-object tracking and its applications. In sport, it is challenging to detect and track the motion of players across video frames. The task uses optical flow analysis for motion detection and a particle filter to track players, taking into account the regions of the sports video in which players move. Optical flow vector calculation gives the motion of players within a frame. The paper presents an improved Lucas-Kanade algorithm for optical flow computation that handles large displacements with greater accuracy in motion estimation.
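The core of basic (single-window, unpyramided) Lucas-Kanade is a 2x2 least-squares solve of the brightness-constancy constraints; the paper's improved large-displacement variant is not reproduced here. A self-contained sketch on a synthetic quadratic-intensity frame shifted one pixel to the right:

```python
# Minimal single-window Lucas-Kanade flow estimate (illustrative sketch).
def lucas_kanade(f1, f2, xc, yc, w=2):
    # Accumulate normal equations  [Sxx Sxy][u]   [bx]
    #                              [Sxy Syy][v] = [by]
    Sxx = Sxy = Syy = bx = by = 0.0
    for y in range(yc - w, yc + w + 1):
        for x in range(xc - w, xc + w + 1):
            ix = (f1[y][x + 1] - f1[y][x - 1]) / 2.0   # central differences
            iy = (f1[y + 1][x] - f1[y - 1][x]) / 2.0
            it = f2[y][x] - f1[y][x]                   # temporal derivative
            Sxx += ix * ix; Sxy += ix * iy; Syy += iy * iy
            bx -= ix * it;  by -= iy * it
    det = Sxx * Syy - Sxy * Sxy                        # 2x2 Cramer solve
    return (Syy * bx - Sxy * by) / det, (Sxx * by - Sxy * bx) / det

# Synthetic textured frame and a copy shifted one pixel to the right.
N = 24
f1 = [[x * x + y * y for x in range(N)] for y in range(N)]
f2 = [[(x - 1) ** 2 + y * y for x in range(N)] for y in range(N)]
u, v = lucas_kanade(f1, f2, 10, 10)
print(round(u, 2), round(v, 2))   # flow close to (1, 0)
```

Real trackers add image pyramids (for large displacements, the very weakness the paper targets) and check that the 2x2 matrix is well conditioned before trusting the estimate.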
Rapid progress is being seen in the field of robotics, in both the educational and industrial automation sectors. Robotics education in particular is gaining from technological advances that provide more learning opportunities, and in the automation sector there is growing demand to automate daily human activities with robots. Given this advancement and demand, realizing a popular computer game in hardware can help students learn and acquire skills in robotics. A game such as Pacman offers challenges on both the software and hardware fronts. In software, it requires algorithms for a robot to escape from a pool of attacking robots, and algorithms for multiple ghost robots to attack the Pacman; on the hardware front, it requires integrating various systems to realize the game. This project aims to demonstrate the Pacman game both in the real world and in simulation. For simulation, Player/Stage is used to develop single-client and multi-client architectures. The multi-client architecture in Player/Stage uses one global simulation proxy to which all robot models connect, reducing the overhead of managing multiple robot proxies; the single-client architecture allows only two robot models to connect to the simulation proxy. The multi-client approach also offers the flexibility to attach sensors to each port, used distinctly by the client attached to the respective robot. The robots, named Pacman and Ghosts, try to escape and attack respectively. A network camera is used to detect the global positions of the robots, and data is shared through inter-process communication.
In Content-Based Image Retrieval (CBIR) systems, the visual contents of the images in the database are extracted and represented by multi-dimensional feature vectors. A well-known class of CBIR system retrieves images by an unsupervised method and is known as cluster-based image retrieval. To enhance the performance and retrieval rate of CBIR systems, we fuse the visual contents of an image. Recently, we developed two cluster-based CBIR systems that fuse the scores of two visual contents of an image. In this paper, we analyze the performance of these two proposed systems at different levels of precision, using images of varying sizes and resolutions. We also compare their performance with that of two existing CBIR systems, UFM and CLUE. Experimentally, we find that the proposed systems outperform the two existing systems, and that one of the proposed systems performs comparatively better at every image resolution.
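Score-level fusion of two visual features can be sketched as min-max normalisation followed by a weighted sum; the distances below are made up, and the actual fusion rule of the proposed systems may differ:

```python
# Sketch of score-level fusion: each feature yields a distance per database
# image; min-max normalise each score set, then combine with weight w.
def fuse(d1, d2, w=0.5):
    def norm(d):
        lo, hi = min(d.values()), max(d.values())
        return {k: (v - lo) / (hi - lo) for k, v in d.items()}
    n1, n2 = norm(d1), norm(d2)
    return {k: w * n1[k] + (1 - w) * n2[k] for k in d1}

color   = {"img1": 0.1, "img2": 0.9, "img3": 0.5}   # made-up distances
texture = {"img1": 0.2, "img2": 0.8, "img3": 0.6}
fused = fuse(color, texture)
ranking = sorted(fused, key=fused.get)              # smallest distance first
print(ranking)   # ['img1', 'img3', 'img2']
```

Normalising before fusing prevents a feature with a larger numeric range from dominating the combined ranking.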
Information systems and networks are subject to electronic attacks. When network attacks hit, organizations are thrown into crisis mode: from the IT department to call centers, to the board room and beyond, all are fraught with danger until the situation is under control. Traditional methods used to counter these threats (e.g. firewalls, antivirus software, password protection) do not provide complete security. This encourages researchers to develop Intrusion Detection Systems capable of detecting and responding to such events. This review paper presents a comprehensive study of Genetic Algorithm (GA) based Intrusion Detection Systems (IDS). It provides a brief overview of rule-based IDS, elaborates the implementation issues of the Genetic Algorithm, and presents a comparative analysis of existing studies.
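To make the GA-based rule-learning idea concrete, here is a deliberately tiny sketch; the 4-bit records, the wildcard rule encoding, and the fitness function are all made up, not drawn from any real IDS dataset:

```python
# Toy GA evolving a detection rule over 4 binary features ('*' = wildcard).
# Fitness = attacks matched minus normal records matched. Illustrative only.
import random

random.seed(7)
ATTACKS = ["1100", "1101", "1110"]
NORMALS = ["0011", "0010", "0001", "0110"]

def matches(rule, record):
    return all(r in ("*", c) for r, c in zip(rule, record))

def fitness(rule):
    return (sum(matches(rule, a) for a in ATTACKS)
            - sum(matches(rule, n) for n in NORMALS))

def random_rule():
    return "".join(random.choice("01*") for _ in range(4))

pop = [random_rule() for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                         # elitism keeps the best rules
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, 4)         # one-point crossover
        child = list(a[:cut] + b[cut:])
        for i in range(4):                   # per-gene mutation
            if random.random() < 0.2:
                child[i] = random.choice("01*")
        children.append("".join(child))
    pop = elite + children

best = max(pop, key=fitness)
print("best rule:", best, "fitness:", fitness(best))
```

Real GA-based IDS work encodes rules over many-valued connection features (protocol, duration, flags) and weighs detections against false alarms in the fitness function, but the select-crossover-mutate loop has this same shape.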
Clustering is the step-by-step process of forming groups of objects whose attribute values are nearly similar. A cluster is thus a collection of objects with nearly identical attribute values: an object in a cluster is similar to the other objects in the same cluster but different from the objects of other clusters. Clustering is used in a wide range of applications, such as pattern recognition, image processing, data analysis, and machine learning. Nowadays more attention is being paid to categorical data than to numerical data; here the range of a numerical attribute is organized into classes such as small, medium, and high. A wide range of algorithms exist for clustering categorical data. Our approach enhances the well-known k-modes clustering algorithm to improve its accuracy. We propose a new approach named “High Accuracy Clustering Algorithm for Categorical Datasets”.
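For reference, the baseline k-modes algorithm the paper builds on replaces k-means' Euclidean distance with a simple mismatch (Hamming) count and replaces means with column-wise modes. A minimal sketch on made-up categorical records, with k fixed to 2 and deterministic seeding:

```python
# Minimal k-modes sketch: Hamming distance on categorical attributes and
# per-cluster column-wise modes as centroids (toy records, k = 2).
from collections import Counter

data = [("red", "small", "round"), ("red", "small", "oval"),
        ("red", "medium", "round"), ("blue", "large", "square"),
        ("blue", "large", "round"), ("green", "large", "square")]

def dist(a, b):                      # Hamming distance between records
    return sum(x != y for x, y in zip(a, b))

def mode_of(cluster):                # column-wise most frequent category
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

modes = [data[0], data[3]]           # deterministic seeding for the sketch
for _ in range(10):
    clusters = [[], []]
    for rec in data:
        clusters[min((0, 1), key=lambda i: dist(rec, modes[i]))].append(rec)
    new_modes = [mode_of(c) if c else modes[i] for i, c in enumerate(clusters)]
    if new_modes == modes:           # converged: modes stopped changing
        break
    modes = new_modes

print([len(c) for c in clusters])    # [3, 3]
```

The proposed high-accuracy variant modifies this baseline; its specific changes are not reproduced here.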
A brain tumor is a malformed growth of cells within the brain that may be cancerous or non-cancerous; the term 'malformed' indicates the existence of a tumor. The tumor may be benign or malignant, and medical support is needed for further classification. A brain tumor must be detected, diagnosed, and evaluated at the earliest stage, since the medical problems become grave if it is detected later. Among the various technologies available for diagnosing brain tumors, MRI is the preferred one, enabling both diagnosis and evaluation. The current work presents various clustering techniques employed to detect brain tumors, classifying images as normal or malformed (if a tumor is detected). The algorithm involves steps such as preprocessing, segmentation, feature extraction, and classification of MR brain images. Finally, the confirmatory step specifies the tumor area using a region-of-interest technique.
A proxy signature scheme enables a proxy signer to sign a message on behalf of the original signer. In this paper, we propose an ECDLP-based solution for the scheme of Chen et al. [1]. We describe an efficient and secure proxy multi-signature scheme that satisfies all the proxy requirements and requires only elliptic curve multiplication and elliptic curve addition, which incurs less computational overhead than modular exponentiation. Our scheme also withstands original-signer forgery and public-key substitution attacks.
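The efficiency claim rests on elliptic curve point addition and scalar multiplication being cheaper than modular exponentiation at comparable security. As an illustration of those two primitives only (not the signature scheme itself), here they are on the textbook toy curve y² = x³ + 2x + 2 over F₁₇, which is far too small to be secure:

```python
# Toy elliptic-curve arithmetic over a small prime field, illustrating the
# point additions and scalar multiplications such schemes rely on.
# Curve: y^2 = x^3 + 2x + 2 mod 17 -- a teaching example, NOT secure.
P, A = 17, 2

def ec_add(p1, p2):
    if p1 is None: return p2        # None represents the point at infinity
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                 # p2 is the inverse of p1
    if p1 == p2:                    # tangent slope for doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:                           # chord slope for addition
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, pt):                  # double-and-add scalar multiplication
    result = None
    while k:
        if k & 1:
            result = ec_add(result, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return result

G = (5, 1)                          # generator; the group has order 19
print(ec_mul(2, G))                 # (6, 3)
print(ec_mul(19, G))                # None: 19*G is the point at infinity
```

Double-and-add needs only O(log k) point operations, which is what makes EC-based schemes cheaper than schemes built on large modular exponentiations.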
Watermarking has been proposed as a method to enhance data security. Watermarking requires extreme care when embedding additional data within images, because the additional information must not affect the image quality. Digital watermarking is a method through which we can authenticate images, videos, and even texts: adding a text or image watermark to a photo or animated image protects copyright and discourages unauthorized use. The functions of watermarking include not only authentication but also protection of documents against malicious attempts to alter them or to claim rights over them. We therefore seek a watermarking scheme whose hiding method does not affect image quality; in this paper, a method of hiding data using LSB replacement is proposed.
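LSB replacement stores the watermark bits in the least significant bit of each pixel, changing each pixel value by at most 1. A minimal sketch (the flat list of grayscale values stands in for a real image buffer):

```python
# LSB-replacement sketch: embed a text watermark in the least significant
# bits of grayscale pixel values, then read it back.
def to_bits(message):
    return [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]

def embed(pixels, message):
    bits = to_bits(message)
    assert len(bits) <= len(pixels), "cover image too small"
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n_chars):
    bits = [p & 1 for p in pixels[:n_chars * 8]]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8)).decode()

cover = list(range(50, 120))          # stand-in for grayscale pixel values
stego = embed(cover, "OK")
print(extract(stego, 2))              # OK
print(max(abs(a - b) for a, b in zip(cover, stego)))  # 1: imperceptible
```

Because the maximum per-pixel change is 1 gray level, the watermark is visually imperceptible; the tradeoff is fragility, since any lossy re-encoding destroys the LSB plane.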
Today, among the various media for data transmission and storage, our sensitive data are not secure with the third parties we rely on. Cryptography plays an important role in securing our data from malicious attack. This paper presents a partial image encryption scheme based on bit-plane permutation using the Peter de Jong chaotic map, for secure image transmission and storage. The proposed method is a raw-data encryption in which the bits of some bit-planes are shuffled among other bit-planes based on the chaotic map: the chaotic behavior of the Peter de Jong map is used to permute the positions of all the bit-planes. The results of several experiments, correlation analyses, and sensitivity tests show that the proposed scheme provides an efficient and secure way to encrypt and decrypt images in real time.
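The core mechanism can be sketched as follows: iterate the de Jong map (x' = sin(a·y) − cos(b·x), y' = sin(c·x) − cos(d·y)), use the sort order of the chaotic sequence as a key-dependent permutation of the 8 bit-planes, and invert it to decrypt. The parameters and starting point below are illustrative, not the paper's; a real scheme would derive them from a secret key and permute whole planes across the image:

```python
# Sketch: derive a permutation of the 8 bit-planes of a pixel from Peter
# de Jong map iterates; the inverse permutation restores the original.
import math

def dejong_sequence(n, x=0.1, y=0.1, a=1.4, b=-2.3, c=2.4, d=-2.1):
    out = []
    for _ in range(n):
        x, y = math.sin(a * y) - math.cos(b * x), math.sin(c * x) - math.cos(d * y)
        out.append(x)
    return out

def permute_bitplanes(pixel, perm):
    # Move bit-plane i of the 8-bit pixel to position perm[i].
    out = 0
    for i in range(8):
        out |= ((pixel >> i) & 1) << perm[i]
    return out

seq = dejong_sequence(8)
perm = sorted(range(8), key=lambda i: seq[i])   # chaotic, key-dependent order
inv = [perm.index(i) for i in range(8)]         # inverse permutation

cipher = permute_bitplanes(0b10110010, perm)
plain = permute_bitplanes(cipher, inv)
print(plain == 0b10110010)   # True: decryption restores the pixel
```

Sensitivity to the initial conditions (x, y) is what makes chaotic maps attractive here: a tiny key change yields a completely different permutation sequence.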
This paper presents a survey of dependency analysis for Service Oriented Architecture (SOA) based systems. SOA raises new aspects of dependency analysis due to its distinctive architectural style and programming paradigm. The paper surveys previous work on dependency analysis of service-oriented systems, showing the strengths and weaknesses of the current approaches and tools available for the task in the context of SOA. The main motivation of this work is to summarize recent approaches in this field, identify the major issues and challenges in dependency analysis of SOA-based systems, and motivate further research on the topic.
In this paper, we propose a novel implementation of a soft-core system using the MicroBlaze processor on a Virtex-5 FPGA. Until now, hard-core processors have typically been used as FPGA processor cores; hard cores are fixed gate-level IP functions within the FPGA fabric. The proposed processor is instead a soft-core processor, a microprocessor fully described in software, usually in an HDL, which can be implemented using the EDK tool. We develop a system in which the MicroBlaze processor combines both hardware and software: using this system and the Xilinx platform, a user can control and communicate with all the peripherals on the supported board to build an embedded system. The soft-core processor system, with peripherals such as a UART interface, SPI flash interface, and SRAM interface, is designed using the Xilinx Embedded Development Kit (EDK) tools.
The article presents a simple algorithm to construct a minimum spanning tree and to find the shortest path between a pair of vertices in a graph. Our illustration includes a proof of termination; complexity analysis and simulation results are also included.
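The article's own algorithm is not reproduced here, but the two classical baselines for these problems, Prim's algorithm for the minimum spanning tree and Dijkstra's algorithm for shortest paths, can be sketched on a small weighted graph:

```python
# Classical baselines: Prim's MST and Dijkstra's shortest path on a small
# undirected weighted graph given as an adjacency dict.
import heapq

graph = {"a": {"b": 2, "c": 5}, "b": {"a": 2, "c": 1, "d": 4},
         "c": {"a": 5, "b": 1, "d": 1}, "d": {"b": 4, "c": 1}}

def prim(g, start="a"):
    seen, total = {start}, 0
    heap = [(w, v) for v, w in g[start].items()]
    heapq.heapify(heap)
    while heap and len(seen) < len(g):
        w, v = heapq.heappop(heap)          # lightest edge leaving the tree
        if v not in seen:
            seen.add(v)
            total += w
            for nxt, w2 in g[v].items():
                if nxt not in seen:
                    heapq.heappush(heap, (w2, nxt))
    return total

def dijkstra(g, src, dst):
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):   # stale heap entry, skip
            continue
        for v, w in g[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

print(prim(graph))                  # 4: edges a-b, b-c, c-d
print(dijkstra(graph, "a", "d"))    # 4: path a-b-c-d
```

Both run in O(E log V) with a binary heap, which is the standard against which a new simple algorithm would be compared.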
WiMAX technology has reshaped the framework of broadband wireless internet service, bringing internet access to unconnected or detached areas such as eastern South Africa and rural regions of America and Asia. Full-duplex helpers, employed with a relay-station selection and indexing method called Randomized Distributed Space-Time Coding (R-DSTC), are used to expand the coverage area of a primary WiMAX station. The basic problem occurs at the cell edge, due to weather conditions (rain, fog), destructive interference from multiple paths in the same communication channel, and interference created by other users. It becomes impractical for the receiver station to decode the transmitted signal successfully at the cell edge, which increases packet loss and retransmissions. Yet WiMAX is an outstanding technology for improving the quality of internet service, offering services such as Voice over Internet Protocol, video conferencing, and multimedia broadcast, where even a small delay in packet transmission can cause a serious loss in communication. Setting up another WiMAX station nearby is not a good alternative either: although a mobile station can easily hand over to another base station when it receives a stronger signal, in rural areas with few customers, installing base stations close together is too costly. In this review article, we present a scheme that uses the R-DSTC technique to select helpers (relay nodes) randomly, expanding the coverage area and assisting mobile stations with secure communication with the base station. In this work, we use full-duplex helpers for better utilization of bandwidth.
Radio Frequency Identification (RFID) has become an emerging technique for tracking and item identification, with various RFID technologies available depending on the function. The drawbacks of passive RFID technology, related to tag reading range and reliability in difficult environmental conditions, limit performance in real-life situations [1]. To improve reading range and reliability, we consider implementing active backscattering tag technology. Software Defined Radio (SDR) technology is used to build mobiles supporting multiple radio standards in 4G networks. The restrictions of existing RFID and SDR technologies can be overcome by developing and implementing an SDR active backscattering tag compatible with the EPCglobal UHF Class 1 Generation 2 (Gen2) RFID standard. Such technology can serve many applications and services.
Francesca Gottschalk - How can education support child empowerment? (EduSkills OECD)
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
A Strategic Approach: GenAI in Education (Peter Windle)
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Embracing GenAI - A Strategic Imperative (Peter Windle)
2024.06.01 Introducing a competency framework for language learning materials (Sandy Millin)
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Operation “Blue Star” is the only event in the history of independent India in which the state went to war with its own people. Even after about 40 years, it is not clear whether it was the culmination of the state's anger at the people of the region, a political game of power, or the start of a dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from the mainstream due to the denial of their just demands during a long democratic struggle since independence. As happens all over the world, this led to a militant struggle with great loss of life among military, police, and civilian personnel. The killing of Indira Gandhi and the massacre of innocent Sikhs in Delhi and other Indian cities were also associated with this movement.
Read the latest issue of The Challenger! We are thrilled to announce that our school paper has qualified for the National Schools Press Conference (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit www.vavaclasses.com
Unit 8 - Information and Communication Technology (Paper I) (Thiyagu K)
These slides describe the basic concepts of ICT, the basics of email, emerging technology, and digital initiatives in education. This presentation aligns with the UGC Paper I syllabus.