The explosion of mobile devices has fuelled the advancement of pervasive computing to provide personal assistance in this
information-driven world. Pervasive computing takes advantage of context-aware computing to track, use and adapt to contextual
information. The context that has attracted the attention of many researchers is the activity context. There are six major techniques that
are used to model activity context. These techniques are key-value, logic-based, ontology-based, object-oriented, mark-up schemes and
graphical. This paper analyses these techniques in detail, describing how each is implemented and reviewing its pros and cons. The paper ends with a hybrid modeling method that fits heterogeneous environments while covering the entire modeling process, from data acquisition through utilization. The modeling stages of activity context are data sensation, data abstraction, and reasoning and planning. The work revealed that mark-up schemes and object-oriented techniques are best suited to the data sensation stage. Key-value and object-oriented techniques fairly support the data abstraction stage, whereas logic-based and ontology-based techniques are ideal for the reasoning and planning stage. In a distributed system, mark-up schemes are very useful for data communication over a network, and the graphical technique should be used when saving context data into a database.
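A minimal sketch (with invented attributes) of the key-value technique discussed above, showing why its flat structure supports simple capture but limits the reasoning-and-planning stage:

```python
# Hypothetical key-value activity context: flat attribute pairs, easy to
# populate at the data-sensation stage but carrying no semantics for
# later reasoning.
context = {
    "user": "alice",
    "activity": "running",
    "location": "park",
    "heart_rate_bpm": 142,
}

def matches(ctx, **conditions):
    """Naive key-value lookup: exact matching only, no inference."""
    return all(ctx.get(k) == v for k, v in conditions.items())

print(matches(context, activity="running"))     # exact key-value match works
print(matches(context, activity="exercising"))  # fails: without an ontology,
                                                # 'running' is not known to
                                                # be a kind of exercise
```

The second query is precisely what ontology-based or logic-based models add on top of plain key-value storage.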
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING (dannyijwest)
Social networks have become one of the most popular platforms for users to communicate and share their interests without being in the same geographical location. The great and rapid growth of social media sites such as Facebook, LinkedIn and Twitter produces a huge amount of user-generated content. Thus, improving information quality and integrity becomes a great challenge for all social media sites, which aim to let users reach the desired content or the best link relation through improved search and linking techniques. Introducing semantics to social networks will therefore widen the representation of the social network. In this paper, a new model of social networks based on semantic tag ranking is introduced. The model is based on the concept of multi-agent systems. In this proposed model, the representation of social links is extended with the semantic relationships found in the vocabularies known as tags in most social networks. The proposed model for the social media engine is based on enhanced Latent Dirichlet Allocation (E-LDA) as a semantic indexing algorithm, combined with Tag Rank as a social network ranking algorithm. The improvements in the E-LDA phase are achieved by tuning the LDA algorithm with optimal parameters; a filter is then introduced to enhance the final indexing output. In the ranking phase, applying Tag Rank on top of the indexing phase improves the ranking output. Simulation results of the proposed model show improvements in both indexing and ranking output.
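The ranking phase can be illustrated with a PageRank-style score over a tag co-occurrence graph. The toy posts, graph construction, and damping factor below are assumptions for illustration, not the authors' exact Tag Rank setup:

```python
# Hypothetical tag graph: tags are linked when they co-occur on a post.
posts = [
    {"python", "ml"},
    {"python", "web"},
    {"ml", "ai"},
    {"python", "ai", "ml"},
]

tags = sorted(set().union(*posts))
links = {t: set() for t in tags}
for post in posts:
    for t in post:
        links[t] |= post - {t}

def tag_rank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over the (symmetric) tag graph."""
    n = len(links)
    rank = {t: 1.0 / n for t in links}
    for _ in range(iters):
        rank = {
            t: (1 - damping) / n
               + damping * sum(rank[s] / len(links[s])
                               for s in links if t in links[s])
            for t in links
        }
    return rank

ranks = tag_rank(links)
# Tags that co-occur with many well-connected tags rank highest.
print(max(ranks, key=ranks.get))
```

Scores form a probability distribution over tags, so they can be combined directly with per-topic weights from an LDA-style indexing phase.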
Semantic Annotation Framework For Intelligent Information Retrieval Using KIM... (dannyijwest)
Due to the explosion of information and knowledge on the web and the wide use of search engines for desired information, the role of knowledge management (KM) is becoming more significant in organizations. Knowledge management in an organization is used to create, capture, store, share, retrieve and manage information efficiently. The semantic web, an intelligent and meaningful web, tends to provide a promising platform for knowledge management systems and vice versa, since they have the potential to give each other the real substance for machine-understandable web resources, which in turn will lead to intelligent, meaningful and efficient information retrieval on the web. Today, the challenge for the web community is to integrate distributed heterogeneous resources on the web with the objective of an intelligent web environment focused on data semantics and user requirements. Semantic annotation (SA), which assigns to the entities in a text links to their semantic descriptions, is widely used for this purpose. Various tools, such as KIM and Amaya, may be used for semantic annotation.
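A toy sketch of what semantic annotation produces: entity mentions linked to semantic descriptions. The URIs and lookup table below are invented; KIM's actual knowledge base and API are not reproduced here:

```python
# Invented mini knowledge base: entity mention -> ontology URI.
ENTITY_URIS = {
    "Paris": "http://example.org/ontology#City/Paris",
    "France": "http://example.org/ontology#Country/France",
}

def annotate(text):
    """Return (mention, uri) pairs for known entities found in the text."""
    return [(m, uri) for m, uri in ENTITY_URIS.items() if m in text]

annotations = annotate("Paris is the capital of France.")
print(annotations)
```

A retrieval system can then index the URIs rather than the surface strings, so a query about "French cities" can match this sentence through the ontology.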
EFFICIENT SCHEMA BASED KEYWORD SEARCH IN RELATIONAL DATABASES (IJCSEIT Journal)
Keyword search in relational databases allows users to search for information without knowing the database schema or using Structured Query Language (SQL). In this paper, we address the problem of generating and evaluating candidate networks. In candidate network generation, overhead is caused by the growth in the number of joining tuples with the size of the minimal candidate network. To reduce this overhead, we propose candidate network generation algorithms that generate a minimum number of joining tuples subject to the maximum size of a tuple set. We first generate a set of joining tuples, the candidate networks (CNs). Since it is difficult to obtain an optimal query processing plan while generating a large number of joins, we also develop a dynamic CN evaluation algorithm (D_CNEval) that generates connected tuple trees (CTTs) while reducing the size of intermediate join results. The performance of the proposed algorithms is evaluated on the IMDB and DBLP datasets and compared with existing algorithms.
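To make the candidate-network idea concrete, the hypothetical sketch below renders a chain-shaped CN as a SQL join. The table and key names are invented, and real CNs may be arbitrary trees rather than chains:

```python
# A chain-shaped candidate network is a sequence of tuple sets joined
# along shared key columns; evaluating it yields the connected tuple
# trees. Schema below (Author/Paper/Citation) is invented.
def cn_to_sql(cn):
    """Render a chain CN [(table, join_col_to_next), ...] as a SQL query."""
    select = "SELECT * FROM " + cn[0][0]
    for (left, key), (right, _) in zip(cn, cn[1:]):
        select += f" JOIN {right} ON {left}.{key} = {right}.{key}"
    return select

cn = [("Author", "author_id"), ("Paper", "paper_id"), ("Citation", None)]
print(cn_to_sql(cn))
```

The join order chosen here is simply the chain order; the paper's D_CNEval instead picks an order that keeps intermediate join results small.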
Immune-Inspired Method for Selecting the Optimal Solution in Semantic Web Ser... (IJwest)
The increasing interest in developing efficient and effective optimization techniques has led researchers to turn their attention towards biology. Biology offers many clues for designing novel optimization techniques: such approaches exhibit self-organizing capabilities and permit reaching promising solutions without a central coordinator. In this paper we handle the problem of dynamic web service composition using the clonal selection algorithm. In order to assess the optimality of a given composition, we use the QoS attributes of the services involved in the workflow as well as the semantic similarity between these components. The experimental evaluation shows that the proposed approach performs better than other approaches such as the genetic algorithm.
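A self-contained sketch of the clonal selection algorithm on a toy one-dimensional objective. The paper's actual fitness combines QoS attributes and semantic similarity of services, which is not modeled here:

```python
import random

def affinity(x):
    """Toy objective, maximized at x = 3 (stands in for a QoS score)."""
    return -(x - 3.0) ** 2

def clonal_selection(pop_size=20, generations=60, clones=5, seed=42):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=affinity, reverse=True)
        elite = pop[: pop_size // 2]
        offspring = []
        for rank, cell in enumerate(elite):
            # Clonal selection principle: better cells are cloned and
            # mutated with smaller steps (hypermutation inversely
            # proportional to affinity rank).
            step = 0.1 * (rank + 1)
            offspring += [cell + rng.gauss(0, step) for _ in range(clones)]
        # Elitism: keep the best of parents plus mutated clones.
        pop = sorted(elite + offspring, key=affinity, reverse=True)[:pop_size]
    return max(pop, key=affinity)

best = clonal_selection()
print(best)  # converges near the optimum at 3.0
```

For service composition, each "cell" would encode one concrete service per workflow task, and mutation would swap in alternative candidate services.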
Unstructured multidimensional array multimedia retrieval model based on XML database (eSAT Journals)
Abstract: Drawing on ideas from data warehouses, data cubes and XML, this paper presents a new database structure model that organizes unstructured data in a multidimensional data cube built on an XML database. In this XML data cube, clustered data are stored in an instance table, and the corresponding leading data are stored in a dimension table. The relational model is helpful for constructing a data model but lacks flexibility; the new data model can compensate for this defect. When querying, the leading data is obtained from the XML dimension table and the unstructured data is then retrieved through XQuery. This increases the flexibility of the XML database. Keywords: XML, multimedia, multi-dimension, database, retrieval model, multidimensional array, unstructured data.
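The two-step lookup (dimension table first, then instance data) can be sketched with a standard-library XPath query. The XML layout below is invented, and the paper's model uses XQuery over a full XML database rather than this in-memory toy:

```python
import xml.etree.ElementTree as ET

# Invented cube layout: a dimension table mapping leading values to
# instance ids, and an instance table holding the unstructured data.
doc = ET.fromstring("""
<cube>
  <dimension name="genre">
    <entry value="jazz" ref="i1"/>
    <entry value="rock" ref="i2"/>
  </dimension>
  <instances>
    <item id="i1">A Love Supreme</item>
    <item id="i2">Abbey Road</item>
  </instances>
</cube>
""")

def lookup(doc, value):
    """Step 1: find the leading value in the dimension table.
    Step 2: fetch the referenced instance data."""
    entry = doc.find(f".//dimension/entry[@value='{value}']")
    if entry is None:
        return None
    item = doc.find(f".//instances/item[@id='{entry.get('ref')}']")
    return item.text

print(lookup(doc, "jazz"))
```

ElementTree only supports a small XPath subset; a real deployment would run the equivalent XQuery inside the XML database engine.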
Efficient multi-document summary generation using neural network (INFOGAIN PUBLICATION)
In the last few years, online information has grown tremendously on the World Wide Web and on users' desktops, and it therefore receives much attention in the field of automatic text summarization. Text mining has become a significant research field, as it produces valuable data from large amounts of unstructured text. Summarization systems make it possible to surface the important keywords of a text, so the consumer spends less time reading the whole document. The main objective of a summarization system is to generate a new form that expresses the key meaning of the contained text. This paper studies various existing techniques and the need for novel multi-document summarization schemes, motivated by the growing need to provide high-quality summaries in a very short time. In the proposed system, users can quickly and easily access well-formed summaries that express the key meaning of the contained text. The primary focus of this paper is the f_β-optimal merge function, a recently presented function that uses the weighted harmonic mean to balance precision and recall. The proposed system uses bisecting k-means clustering to improve running time and neural networks to improve the accuracy of the summary generated by the NEWSUM algorithm.
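The f_β measure named above is the weighted harmonic mean of precision and recall. The function below is the standard formula; how the paper embeds it in the merge step is only hinted at here:

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall.
    beta > 1 weights recall more heavily; beta < 1 favors precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.8, 0.5))          # plain F1
print(f_beta(0.8, 0.5, beta=2))  # F2: the lower recall drags the score down more
```

In a merge function, this score lets a summarizer trade coverage of source sentences (recall) against conciseness (precision) with a single tunable parameter.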
Semantic similarity, and the semantic relatedness measure in particular, is very important in the current scenario due to the huge demand for natural language processing applications such as chatbots and for information retrieval systems such as knowledge-base-driven FAQ systems. Current approaches generally use similarity measures that ignore the context-sensitive relationships between words. This leads to erroneous similarity predictions and is not of much use in real-life applications. This work proposes a novel approach that gives an accurate relatedness measure for any two words in a sentence by taking their context into consideration. This context correction yields a more accurate similarity prediction, which in turn yields higher accuracy in information retrieval systems.
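A minimal illustration of context-sensitive relatedness: each word is represented by the bag of words around it, and two words are compared by cosine similarity of those context vectors. Real systems would use contextual embeddings rather than this toy windowed count:

```python
import math

def context_vector(sentence, word, window=2):
    """Bag of words within `window` tokens of each occurrence of `word`."""
    tokens = sentence.lower().split()
    vec = {}
    for i, t in enumerate(tokens):
        if t == word:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vec[tokens[j]] = vec.get(tokens[j], 0) + 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

s = "the bank approved the loan while the river bank flooded"
print(cosine(context_vector(s, "bank"), context_vector(s, "loan")))
```

Because "bank" is represented by its neighbors rather than by a fixed sense, the same machinery can separate the financial and river readings given enough context.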
A Deep Learning Model to Predict Congressional Roll Call Votes from Legislati... (mlaij)
Developments in natural language processing (NLP) techniques, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks allow for a state-of-the-art automated system capable of predicting the status (pass/fail) of congressional roll call votes. The paper introduces a custom hybrid model labeled "Predict Text Classification Network" (PTCN), which takes legislation as input and outputs a prediction of the document's classification (pass/fail). The convolutional and LSTM layers automatically recognize features from the input data's latent space, and the PTCN's custom architecture provides elements that adapt to variance in the input by adjusting the kernel weights over time. At the document level, the model reported an average evaluation score of 67.32% using 10-fold cross-validation. The results suggest that the model can recognize congressional voting behaviors from the associated legislation's language. Overall, the PTCN provides a solution with performance competitive with related systems targeting congressional roll call votes.
Data mining is knowledge discovery in databases, and its goal is to extract patterns and knowledge from large amounts of data. An important branch of data mining is text mining, which extracts high-quality information from text, typically by statistical pattern learning. "High quality" in text mining refers to some combination of relevance, novelty and interestingness. Tasks in text mining include text categorization, text clustering, entity extraction and sentiment analysis. Applications of natural language processing and analytical methods are highly preferred to turn
Enhanced Privacy Preserving Access Control in Incremental Data Using Microaggre... (rahulmonikasharma)
In microdata releases, the main task is to protect the privacy of data subjects. Microaggregation is a disclosure-limitation technique for protecting the privacy of microdata. It is an alternative to generalization and suppression for generating k-anonymous data sets, in which the identity of each subject is hidden within a group of k subjects. Microaggregation perturbs the data, and additional masking allows data utility to be refined in several ways: increasing data granularity, avoiding discretization of numerical data, and reducing the impact of outliers. However, if the variability of the private values within a group of k subjects is too small, k-anonymity does not protect against attribute disclosure. In this work, role-based access control is assumed: the access control policies assign selection predicates to roles, and the concept of an imprecision bound is used to define, for each permission, a threshold on the amount of imprecision that can be tolerated, so the proposed approach reduces the imprecision of each selection predicate. Whereas existing papers anonymize only a static relational table, here the privacy-preserving access control mechanism is applied to incremental data.
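The microaggregation idea itself can be sketched in a few lines for a single numerical attribute: sort, partition into groups of at least k, and release group means, so each released value is shared by at least k records:

```python
def microaggregate(values, k=3):
    """Univariate microaggregation: replace each value by the mean of its
    group of >= k nearest values (by sort order)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    for start in range(0, len(order), k):
        group = order[start:start + k]
        if len(group) < k and start > 0:
            # Merge a short tail into the previous group so every
            # released group still has at least k members.
            group = order[start - k:]
        mean = sum(values[i] for i in group) / len(group)
        for i in group:
            out[i] = mean
    return out

salaries = [30, 32, 31, 90, 88, 95, 50]
print(microaggregate(salaries, k=3))
```

Note how the outlier at 50 is absorbed into a group with the high salaries, which is exactly the low-variability situation the abstract warns can still leak attribute information despite k-anonymity.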
With the rapid development of Geographic Information Systems (GISs) and their applications, more and more geographical databases have been developed by different vendors. However, data integration and access remain a big problem for the development of GIS applications, as no interoperability exists among different spatial databases. In this paper we propose a unified approach to spatial data querying. The paper describes a framework for integrating information from repositories containing different vector data formats and repositories containing raster datasets. The presented approach converts the different vector data formats into a single unified format (the File Geodatabase, "GDB"). In addition, we employ metadata to support a wide range of user queries that retrieve relevant geographic information from heterogeneous and distributed repositories, which enhances both query processing and performance.
USING ONTOLOGIES TO IMPROVE DOCUMENT CLASSIFICATION WITH TRANSDUCTIVE SUPPORT... (IJDKP)
Many applications of automatic document classification require learning accurately with little training data. Semi-supervised classification uses both labeled and unlabeled data for training. This technique has been shown to be effective in some cases; however, the use of unlabeled data is not always beneficial.

On the other hand, the emergence of web technologies has given rise to the collaborative development of ontologies. In this paper, we propose the use of ontologies to improve the accuracy and efficiency of semi-supervised document classification.

We used support vector machines, one of the most effective algorithms studied for text. Our algorithm enhances the performance of transductive support vector machines through the use of ontologies. We report experimental results applying our algorithm to three different datasets. Our experiments show an accuracy improvement of 4% on average, and up to 20%, in comparison with the traditional semi-supervised model.
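The ontology contribution can be illustrated in isolation: expanding document terms with their ontology ancestors lets documents that share no words still share concept features. The tiny ontology below is invented, and the paper combines such features with transductive SVMs rather than this standalone sketch:

```python
# Invented term -> parent-concept hierarchy.
ONTOLOGY = {
    "jazz": "music",
    "rock": "music",
    "music": "art",
    "sculpture": "art",
}

def concept_features(doc):
    """Bag-of-words features expanded with all ontology ancestors."""
    features = set(doc.split())
    for term in doc.split():
        t = term
        while t in ONTOLOGY:      # walk up the concept hierarchy
            t = ONTOLOGY[t]
            features.add(t)
    return features

a = concept_features("jazz concert review")
b = concept_features("sculpture exhibition review")
print(a & b)  # the shared features now include the concept 'art'
```

For an SVM, these concept features enlarge the overlap between labeled and unlabeled documents, which is what makes the transductive margin placement more reliable with little training data.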
Semantic similarity and semantic relatedness
measure in particular is very important in the current scenario
due to the huge demand for natural language processing based
applications such as chatbots and information retrieval systems
such as knowledge base based FAQ systems. Current approaches
generally use similarity measures which does not use the context
sensitive relationships between the words. This leads to erroneous
similarity predictions and is not of much use in real life
applications. This work proposes a novel approach that gives an
accurate relatedness measure of any two words in a sentence by
taking their context into consideration. This context correction
results in a more accurate similarity prediction which results in
higher accuracy of information retrieval systems.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
A Deep Learning Model to Predict Congressional Roll Call Votes from Legislati...mlaij
Developments in natural language processing (NLP) techniques, convolutional neural networks (CNNs), and long-short- term memory networks (LSTMs) allow for a state-of-the-art automated system capable of predicting the status (pass/fail) of congressional roll call votes. The paper introduces a custom hybrid model labeled "Predict Text Classification Network" (PTCN), which inputs legislation and outputs a prediction of the document's classification (pass/fail). The convolutional layers and the LSTM layers automatically recognize features from the input data's latent space. The PTCN's custom architecture provides elements enabling adaptation to the input's variance from adjustment to the kernel weights over time. On the document level, the model reported an average evaluation of 67.32% using 10-fold crossvalidation. The results suggest that the model can recognize congressional voting behaviors from the associated legislation's language. Overall, the PTCN provides a solution with competitive performance to related systems targeting congressional roll call votes.
Data mining is the knowledge discovery in databases and the gaol is to extract patterns and knowledge from
large amounts of data. The important term in data mining is text mining. Text mining extracts the quality
information highly from text. Statistical pattern learning is used to high quality information. High –quality in
text mining defines the combinations of relevance, novelty and interestingness. Tasks in text mining are text
categorization, text clustering, entity extraction and sentiment analysis. Applications of natural language
processing and analytical methods are highly preferred to turn
Enhanced Privacy Preserving Accesscontrol in Incremental Datausing Microaggre...rahulmonikasharma
In microdata releases, main task is to protect the privacy of data subjects. Microaggregation technique use to disclose the limitation at protecting the privacy of microdata. This technique is an alternative to generalization and suppression, which use to generate k-anonymous data sets. In this dataset, identity of each subject is hidden within a group of k subjects. Microaggregation perturbs the data and additional masking allows refining data utility in many ways, like increasing data granularity, to avoid discretization of numerical data, to reduce the impact of outliers. If the variability of the private data values in a group of k subjects is too small, k-anonymity does not provide protection against attribute disclosure. In this work Role based access control is assumed. The access control policies define selection predicates to roles. Then use the concept of imprecision bound for each permission to define a threshold on the amount of imprecision that can be tolerated. So the proposed approach reduces the imprecision for each selection predicate. Anonymization is carried out only for the static relational table in the existing papers. Privacy preserving access control mechanism is applied to the incremental data.
With the rapid development in Geographic Information Systems (GISs) and their applications, more and
more geo-graphical databases have been developed by different vendors. However, data integration and
accessing is still a big problem for the development of GIS applications as no interoperability exists among
different spatial databases. In this paper we propose a unified approach for spatial data query. The paper
describes a framework for integrating information from repositories containing different vector data sets
formats and repositories containing raster datasets. The presented approach converts different vector data
formats into a single unified format (File Geo-Database “GDB”). In addition, we employ “metadata” to
support a wide range of users’ queries to retrieve relevant geographic information from heterogeneous and
distributed repositories. Such an employment enhances both query processing and performance.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
USING ONTOLOGIES TO IMPROVE DOCUMENT CLASSIFICATION WITH TRANSDUCTIVE SUPPORT...IJDKP
Many applications of automatic document classification require learning accurately with little training
data. The semi-supervised classification technique uses labeled and unlabeled data for training. This
technique has been shown to be effective in some cases; however, the use of unlabeled data is not always
beneficial.
On the other hand, the emergence of web technologies has enabled the collaborative development of
ontologies. In this paper, we propose the use of ontologies in order to improve the accuracy and efficiency
of semi-supervised document classification.
We used support vector machines, one of the most effective algorithms studied for text. Our algorithm
enhances the performance of transductive support vector machines through the use of ontologies. We
report experimental results applying our algorithm to three different datasets. Our experiments show an
average accuracy increase of 4%, and up to 20%, in comparison with the traditional semi-supervised
model.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
Empowering Users: UX Lesson from Game DesignAdityo Pratomo
This is a presentation from a recent UX ID meetup, where I pointed out several lessons from game design that UX designers can apply to empower the users of their products.
Applying Utaut and Innovation Diffusion Theory to Understand the Rapid Adopti...Editor IJCATR
M-PESA, the world-leading mobile money system has transformed lives and livelihoods in Kenya and beyond. Financial
inclusion for the marginalized in emerging markets is now feasible and achievable. Mobile money promises a more scalable and
cheaper alternative to the large unbanked populace than conventional banking. In the recent years, Sub-Saharan Africa has rolled out a
number of practical technology-driven innovative products, leading to more and more cashless transactions. One such product is
M-PESA, a mobile-based financial innovation that has achieved unprecedented growth since its inception in March 2007 by the mobile
network operator Safaricom. In spite of its tantalizing potential, one major challenge is how to optimally capture the market. This
paper analyses the M-PESA ecosystem by building theoretical linkages between two main theories: (i) the Innovation Diffusion Theory
and (ii) the Unified Theory of Acceptance and Use of Technology. The questions the author addresses are: Which factors are
responsible for M-PESA's rapid adoption? How does Safaricom maintain its strong grip as a Mobile Network Operator in the financial
sector?
Formal Models for Context Aware ComputingEditor IJCATR
Context-aware computing refers to a general class of mobile systems that can sense their physical environment, and
adapt their behavior accordingly. In this paper we seek to develop a systematic understanding of context-aware computing by
constructing a formal model and notation for expressing context-aware computations. This discussion is followed by a
description and comparison of current context modeling and reasoning techniques.
Information Upload and retrieval using SP Theory of IntelligenceINFOGAIN PUBLICATION
Cloud computing has become an important part of today's technology landscape, and storing data in the cloud is of high importance, as the need for virtual space to hold massive amounts of data has grown over the years. However, upload and download times are limited by processing time, so this issue must be addressed to handle large data volumes and their processing. Another common problem is deduplication. As cloud services grow at a rapid rate, increasingly large volumes of data are stored on remote cloud servers, and many of the stored files are duplicates because different users upload the same file from different locations. A recent survey by EMC estimates that about 75% of the digital data in the cloud consists of duplicate copies. To overcome these two problems, this paper applies the SP theory of intelligence, using lossless compression of information to make big data smaller and thus reduce the problems of storing and managing large amounts of data.
Following the user’s interests in mobile context aware recommender systemsBouneffouf Djallel
The wide development of mobile applications provides a considerable amount of data of all types (images, texts, sounds, videos, etc.). In this context, Mobile Context-aware Recommender Systems (MCRS) suggest suitable information to the user depending on her/his situation and interests. Two key questions have to be considered: (1) how to recommend information that follows the evolution of the user's interests, and (2) how to model the user's situation and its related interests? To the best of our knowledge, no existing MCRS tries to answer both questions as we do. This paper describes ongoing work on the implementation of an MCRS based on our proposed hybrid-ε-greedy algorithm, which combines the standard ε-greedy algorithm with both content-based filtering and case-based reasoning techniques.
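The hybrid-ε-greedy algorithm itself is not given in the abstract, but the standard ε-greedy rule it extends can be sketched in a few lines. This is a minimal illustration, not the paper's method: the arm estimates, the reward noise and the function names are all invented for the example.

```python
import random

def epsilon_greedy(epsilon, estimates, rng=random):
    """With probability epsilon pick a random arm (explore),
    otherwise pick the arm with the best estimated reward (exploit)."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

def run_bandit(true_rewards, epsilon, steps, seed=0):
    """Simulate a bandit, keeping a running-mean reward estimate per arm."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        arm = epsilon_greedy(epsilon, estimates, rng)
        reward = true_rewards[arm] + rng.gauss(0, 0.1)  # noisy feedback
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates, counts
```

In the hybrid version described by the paper, the "explore" branch would draw from content-based or case-based candidates rather than uniformly at random.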
MAP/REDUCE DESIGN AND IMPLEMENTATION OF APRIORI ALGORITHM FOR HANDLING VOLUMIN...acijjournal
Apriori is one of the key algorithms to generate frequent itemsets. Analysing frequent itemset is a crucial
step in analysing structured data and in finding association relationship between items. This stands as an
elementary foundation to supervised learning, which encompasses classifier and feature extraction
methods. Applying this algorithm is crucial to understand the behaviour of structured data. Most of the
structured data in the scientific domain are voluminous. Processing such data requires state-of-the-art
computing machines. Setting up such an infrastructure is expensive. Hence a distributed environment
such as a clustered setup is employed for tackling such scenarios. Apache Hadoop distribution is one of
the cluster frameworks in distributed environment that helps by distributing voluminous data across a
number of nodes in the framework. This paper focuses on map/reduce design and implementation of
Apriori algorithm for structured data analysis.
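Before distributing the computation with map/reduce, the core Apriori loop (count support, prune, generate larger candidates) can be sketched in plain Python. This is an illustrative single-machine version, not the paper's Hadoop implementation.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frequent itemset (frozenset): support} for itemsets whose
    support (fraction of transactions containing them) meets min_support."""
    transactions = [set(t) for t in transactions]
    n = len(transactions)
    # Level 1 candidates: every single item seen in the data.
    current = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while current:
        # Count support for each candidate in one pass over the data.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(survivors)
        # Join step: combine frequent k-itemsets into (k+1)-item candidates.
        keys = list(survivors)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    return frequent
```

In a map/reduce setting, the counting pass becomes the map phase (emit each candidate found in a transaction) and the support aggregation becomes the reduce phase.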
Analysis and assessment software for multi-user collaborative cognitive radi...IJECEIAES
Computer simulations are without a doubt a useful methodology that allows researchers to explore research questions and develop prototypes at lower cost and in shorter timeframes than hardware processes require. The simulation tools used in cognitive radio networks (CRN) are under active development: currently there is no stable simulator that can characterize every element of the cognitive cycle, and the available tools are discrete-event software frameworks. This work presents a spectral-mobility simulator for CRN called "App MultiColl-DCRN", developed with MATLAB's App Designer. In contrast with other frameworks, the simulator uses real spectral-occupancy data and simultaneously analyzes features regarding spectral mobility, decision-making, multi-user access, collaborative scenarios and decentralized architectures. Performance metrics include bandwidth, throughput level, and the numbers of failed, total, interfered, anticipated and perfect handoffs. The assessment of the simulator involves three scenarios: the first and second present a collaborative structure using the multi-criteria optimization and compromise solution (VIKOR) decision-making model and the naïve Bayes prediction technique, respectively, while the third presents a multi-user structure and uses simple additive weighting (SAW) as the decision-making technique. The present development represents a contribution to the cognitive radio network field, since no existing software offers the same features.
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKINGIJwest
Social networks have become one of the most popular platforms allowing users to communicate and share their interests without being in the same geographical location. The great and rapid growth of social media sites such as Facebook, LinkedIn and Twitter produces a huge amount of user-generated content. Improving information quality and integrity therefore becomes a great challenge for all social media sites, which must allow users to get the desired content or be linked to the best relation using improved search/link techniques. Introducing semantics to social networks widens the representation of the social network. In this paper, a new model of social networks based on semantic tag ranking is introduced. The model is based on the concept of multi-agent systems. In the proposed model, the representation of social links is extended by the semantic relationships found in the vocabularies known as tags in most social networks. The proposed model for the social media engine is based on enhanced Latent Dirichlet Allocation (E-LDA) as a semantic indexing algorithm, combined with Tag Rank as a social-network ranking algorithm. The E-LDA phase improves the LDA algorithm by using optimal parameters; a filter is then introduced to enhance the final indexing output. In the ranking phase, applying Tag Rank on top of the indexing phase improves the ranking output. Simulation results of the proposed model show improvements in both indexing and ranking output.
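The paper's Tag Rank algorithm is not reproduced in the abstract, but the family it belongs to, PageRank-style power iteration over a tag graph, can be sketched as follows. This is a minimal illustration; the damping factor, iteration count and graph encoding are assumptions, not details from the paper.

```python
def tag_rank(adj, damping=0.85, iters=50):
    """PageRank-style ranking over a tag graph given as {tag: [neighbor tags]}.
    Every tag (including pure targets) must appear as a key of adj."""
    tags = list(adj)
    rank = {t: 1.0 / len(tags) for t in tags}
    for _ in range(iters):
        # Teleport mass, shared uniformly across all tags.
        new = {t: (1 - damping) / len(tags) for t in tags}
        for t, nbrs in adj.items():
            if nbrs:
                share = damping * rank[t] / len(nbrs)
                for n in nbrs:
                    new[n] += share
            else:
                # Dangling tag: spread its rank uniformly.
                for n in tags:
                    new[n] += damping * rank[t] / len(tags)
        rank = new
    return rank
```

A tag pointed to by many well-ranked tags ends up with the highest score, which is the intuition behind ranking tags by their position in the semantic graph rather than by raw frequency.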
Control systems and computer science are two distinct and important fields
of engineering. The development of cloud computing in computer science
has enabled the controllers widely used in control systems to migrate to
the cloud, creating a new field of research: cloud-based control systems
(CCS). This paper used the systematic literature review (SLR) approach to
gain insight into current CCS research; the objectives include a review of
demographics, research topics, evaluation methods and application domains.
The study obtained 64 primary studies from 581 articles. CCS has a distinct
characteristic: although the cloud and network dynamics, when coupled with
the controlled plant, are inherently nonlinear, research efforts have
successfully applied linear models with optimal control for a limited set
of control objectives. Furthermore, the studies consider cloud-centric and
cloud-fog network architectures, and the quantitative work relies mainly on
simulation and discussion. Finally, the SLR summarizes open challenges for
future CCS research.
SYSTEMATIC LITERATURE REVIEW ON RESOURCE ALLOCATION AND RESOURCE SCHEDULING I...ijait
The objective of this work is to highlight the key features of, and offer promising future directions for,
research on resource allocation, resource scheduling and resource management from 2009 to 2016. It
exemplifies how research on resource allocation, resource scheduling and resource management has
progressively increased over the past decade, by inspecting articles and papers from scientific and
standard publications. The survey proceeded in three stages. First, we investigated the combination of
resource allocation and resource scheduling, and then proceeded to resource management. Second, we
performed a structural analysis of different authors' prominent contributions, tabulated by category and
shown graphically. Third, we grouped conceptually similar work in the field and provided a summary of all
resource-allocation approaches. In cloud computing environments, there are two players: cloud providers
and cloud users. On one hand, providers hold massive computing resources in their large datacenters and
rent resources out to users on a per-usage basis. On the other hand, there are users who
An IoT platform is a fusion of physical resources, such as connectors, wireless networks and smartphones, with computer technologies such as protocols and web-service technologies. The heterogeneity of the technologies used generates a high cost at the interoperability level. This paper presents a generic meta-model of IoT interoperability based on different organizational concepts such as service, compilation, activity and architecture. This model, called M2IOTI, defines a very simple description of IoT interoperability: it is a meta-model by which one can build an IoT interoperability model with different forms of organization. We show that this meta-model accommodates the heterogeneity of connected objects in semantic technologies, activities, services and architectures, in order to offer a high level of IoT interoperability. We also introduce the PSM concept, which uses the same conceptual model to describe each existing interoperability model, such as the conceptual, behavioral, semantic and dynamic models. We have also proposed a PIM model that regroups all the concepts common to the PSM interoperability models.
Cobe framework cloud ontology blackboard environment for enhancing discovery ...ijccsa
The relatively new concept of cloud computing and its associated methodologies offer many advantages
in today's world. Such advantages range from providing solutions for the integration of miscellaneous
systems to guarantees for distributing search facilities and integrating the software tools used by
consumers and different providers. In this paper, we have constructed an ontology-based cloud framework
with a view to identifying its external agents' interoperability. The proposed framework has been designed
using the blackboard design style and is composed of two main components: a controller and a cloud
ontology blackboard environment. The controller interacts with consumers after receiving a subject
request, spontaneously using the ontology base to distribute it and constitute the required related
responses, whereas the second component interacts with different cloud providers and systems, using the
meta-ontology framework to restructure data via AI reasoning tools and map them to the corresponding
redistributed request. Finally, an applicable e-tourism case study will be explored.
Proactive Intelligent Home System Using Contextual Information and Neural Net...IJERA Editor
Nowadays, cities around the world intend to use information technology to improve the lives of their citizens.
Future smart cities will incorporate digital data and technology to interact differently with their human
inhabitants.
Among the key components of a smart city is the smart home: an autonomic environment that can provide
various smart services by considering the user's context information. Several methods are used in
context-aware systems to provide such services. In this paper, we propose an approach to offer the most
relevant services to the user upon any significant change in their context. The proposed approach is
based on the use of context-history information together with user profiling and machine-learning
techniques. Experiments show that the proposed solution can efficiently provide the most useful services
to the user in an intelligent home environment.
Similar to Activity Context Modeling in Context-Aware (20)
Text Mining in Digital Libraries using OKAPI BM25 ModelEditor IJCATR
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information-retrieval models that do not rank retrieved results by relevance. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. Okapi BM25 was selected because it is a probability-based relevance-ranking algorithm. A case study was conducted, and the model design was based on information-retrieval processes. The performance of the Boolean, vector space and Okapi BM25 models was compared for data retrieval; relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed both the Boolean and the vector space model. Therefore, this paper proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
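As a rough illustration of why BM25 yields graded relevance scores where the Boolean model cannot, here is a minimal sketch of the Okapi BM25 scoring formula. It uses a common smoothed-IDF variant, and the k1 and b values are conventional defaults, not parameters taken from the paper.

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of one document (a list of tokens) for a query,
    against a corpus (a list of tokenized documents)."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
        tf = doc.count(term)
        # Term-frequency saturation plus length normalization.
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

Ranking is then a matter of sorting documents by this score for a given query, which is exactly the relevance ranking the Boolean model lacks.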
Green Computing, eco trends, climate change, e-waste and eco-friendlyEditor IJCATR
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in NigeriaEditor IJCATR
Computers today are an integral part of individuals' lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance they attach to various attributes differs, and a more environment-friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria, highlight a series of measures and the advantages they herald for the country, and propose a series of action steps to develop these areas further. It is possible for Nigeria to achieve an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate-change legislation and energy-efficiency directives. The costs of implementing energy-efficiency and renewable-energy measures are minimal, as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...Editor IJCATR
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among mobile vehicles and between vehicles and roadside units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links, and the cluster arrangement, according to size and geographical extent, has a serious influence on the quality of communication. VANETs are a subclass of mobile ad hoc networks involving more complex mobility patterns; because of this mobility, the topology changes very frequently, which raises a number of technical challenges, including the stability of the network. There is therefore a need for cluster configurations that lead to a more stable, realistic network. The paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their numbers are varied to find the more stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and their results are then converted into training data. The training data used in this study were generated from the UCI Pima database, with six attributes used to classify diabetes as positive or negative. Of the various classification methods in common use, three were compared in this study on one identical case: fuzzy KNN, the C4.5 algorithm and the Naïve Bayes Classifier (NBC). The objective was to create software to classify DM using the tested methods and to compare the three methods on accuracy, precision and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy reaching 96% and 98%, respectively. In second place, the NBC method had average and maximum accuracy of 87.5% and 90%, respectively. Lastly, the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
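As a point of reference for the methods compared, a plain (non-fuzzy) k-NN classifier can be sketched in a few lines; the fuzzy variant used in the paper additionally assigns graded class memberships and weights each neighbour's vote by distance. The data and labels below are invented for illustration.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-NN majority vote over squared Euclidean distance.
    train is a list of (feature_tuple, label) pairs."""
    sq_dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    # Take the k records closest to the query point.
    nearest = sorted(train, key=lambda rec: sq_dist(rec[0], query))[:k]
    # Majority vote among their labels.
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```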
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Research in the field of competitive intelligence and research in the field of web scraping have a mutually symbiotic relationship. In today's information age, websites serve as a main data source. This research focuses on how to get data from websites and how to slow down the download intensity. One problem is that the source websites are autonomous, so their content structure is vulnerable to change at any time; another is that the Snort intrusion-detection system installed on the server detects crawler bots. The researchers therefore propose the Mining Data Records (MDR) method together with exponential smoothing, so that the crawler adapts to changes in content structure and fetches automatically, following the pattern of news occurrences. In tests with a threshold of 0.3 for MDR and a similarity-threshold score of 0.65 for STM, recall and precision values produce an average f-measure of 92.6%. The exponential-smoothing estimate with α = 0.5 produces an MAE of 18.2 duplicate data records, slowing the fetch down to 3.6 duplicate data records from the 21.8 produced by a fixed download/fetch schedule over the average interval between news occurrences.
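The exponential-smoothing estimator used to adapt the fetch schedule can be sketched directly from its definition. This is a minimal illustration, with the paper's α = 0.5 as the default; the function name is invented.

```python
def exp_smooth(series, alpha=0.5):
    """Single exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    The final smoothed value serves as the forecast for the next observation."""
    s = float(series[0])  # initialize with the first observation
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s
```

Fed a history of new-record counts per crawl, the returned forecast can drive how soon the next fetch is scheduled, which is how the crawler slows down when few new records are appearing.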
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes within a single ontology. Ontology-based semantic similarity techniques, such as structure-based measures (the Path Length measure, Wu and Palmer's measure, and Leacock and Chodorow's measure), information-content-based measures (Resnik's measure, Lin's measure), and biomedical domain-ontology measures (Al-Mubaid and Nguyen's measure, SemDist), were evaluated against human experts' ratings and compared on sets of concepts using the ICD-10 "V1.0" terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in a single ontology, and demonstrate that, compared with the existing techniques, SemDist gives the best overall correlation with experts' ratings.
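As a concrete illustration of one of the structure-based measures mentioned, here is a sketch of Wu and Palmer's similarity over a toy is-a taxonomy. The taxonomy and node names are invented for the example; real use would run over ICD-10/UMLS concept hierarchies.

```python
def ancestors(taxonomy, node):
    """Path from node up to the root (node itself included).
    taxonomy maps each child to its single parent (a tree)."""
    path = [node]
    while node in taxonomy:
        node = taxonomy[node]
        path.append(node)
    return path

def wu_palmer(taxonomy, a, b):
    """Wu & Palmer similarity: 2*depth(LCS) / (depth(a) + depth(b)),
    where depth counts nodes from the root and LCS is the lowest
    common subsumer of a and b."""
    anc_a = ancestors(taxonomy, a)
    # First ancestor of b that also subsumes a is the LCS in a tree.
    lcs = next(n for n in ancestors(taxonomy, b) if n in anc_a)
    depth = lambda n: len(ancestors(taxonomy, n))
    return 2.0 * depth(lcs) / (depth(a) + depth(b))
```

Concepts sharing a deep common subsumer score close to 1, while concepts that only meet at the root score near 0, matching the intuition the abstract relies on.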
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
Techniques and tests are tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery. Most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS), and many experiments have been conducted to check the applicability of these measures. In this paper, we measure the semantic similarity between two terms within a single ontology or across multiple ontologies in ICD-10 "V1.0" as the primary source, and compare the results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
Adding an aggregate storage module is an effective way to improve the storage-access performance of small files in OpenStack Swift. Because Swift incurs excessive disk operations when querying metadata, transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write-request queue in chronological order and then stores objects in volumes; these volumes are the large files actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
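The merge-storage idea, packing small objects into large volumes while keeping an object-to-volume index in a key-value map, can be sketched as follows. This is a toy in-memory illustration of the concept, not Swift's actual implementation; the class and parameter names are invented.

```python
class AggregatedStore:
    """Pack small objects into large 'volumes' and keep an
    object -> (volume, offset, length) index, as in the ASS sketch."""

    def __init__(self, volume_limit=1024):
        self.volume_limit = volume_limit
        self.volumes = [bytearray()]  # each bytearray stands for one large file
        self.index = {}               # key-value store: name -> (vol, offset, length)

    def put(self, name, data):
        # Roll over to a new volume when the current one would overflow.
        if len(self.volumes[-1]) + len(data) > self.volume_limit:
            self.volumes.append(bytearray())
        vol = len(self.volumes) - 1
        self.index[name] = (vol, len(self.volumes[vol]), len(data))
        self.volumes[vol].extend(data)

    def get(self, name):
        # One index lookup plus one ranged read, instead of per-object metadata I/O.
        vol, off, length = self.index[name]
        return bytes(self.volumes[vol][off:off + length])
```

The design gain is that many small objects share one large file, so reads become a cheap index lookup plus a ranged read rather than a metadata query per object.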
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of a government's cash resources rely on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems in handling government receipts and payments. In 2016, Nigeria implemented a unified structure as recommended by the IMF, in which collecting all government funds in one account would reduce borrowing costs, extend credit and improve the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO and NPF) charged with vehicular administration and activities in Nigeria. Since the system is web-based, the Object-Oriented Hypermedia Design Methodology (OOHDM) is used, with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web-development technologies. The result is a web-based system that gives proper information about a vehicle, from the exact date of importation to registration and license renewal. Vehicle-owner information, customs-duty information, plate-number registration details and more can be efficiently retrieved from the system by any of the agencies without contacting another agency. A number plate will also no longer be the only means of vehicle identification, as is presently the case in Nigeria: on payment of duty, the unified system automatically generates and assigns a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle, and the UVIPN is linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling goods and services, covering both sales and purchases of items. The Supermarket Management System project is developed with the objective of making the system reliable, easier, faster and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in a wireless sensor network (WSN) [1]: the system cannot run as intended without adequate power, and limited energy is one of the defining characteristics of WSNs [2]. Much research has been done on strategies to overcome this problem, one of which is clustering; the popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH) [3]. In LEACH, clustering is used to determine cluster heads (CHs), which are then assigned to forward packets to the base station (BS). In this research, we propose another clustering technique, which applies the social-network-analysis concept of betweenness centrality (BC) in the setup phase, while in the steady-state phase a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is implemented. The experiment statically deployed 100 nodes in a 100x100 area, with one base station at coordinates (50,50), and ran for 5000 rounds to establish the reliability of the system. The designed routing-protocol strategy was evaluated on network lifetime, throughput and residual energy. The results show that BC-MBDA* outperforms LEACH. This is influenced by LEACH's dynamic way of determining CHs, which change in every data-transmission round: the computation to elect a CH in every round consumes energy. In contrast, BC-MBDA* determines CHs statically, which decreases energy usage.
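For contrast with the static BC-based election, LEACH's per-round cluster-head election can be sketched directly from its standard threshold formula. This is a simplified illustration that omits the eligibility-set bookkeeping (nodes that have already served as CH in the current epoch); the function names are invented.

```python
import random

def leach_threshold(p, r):
    """LEACH CH-election threshold T(n) for round r, with desired CH
    fraction p: T(n) = p / (1 - p * (r mod 1/p))."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(node_ids, p, r, rng=random.Random(1)):
    """Each node becomes a cluster head for round r when its uniform
    draw falls below the threshold (the per-round computation that
    the abstract notes costs energy)."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]
```

The threshold grows over the epoch (reaching 1 in the final round so every remaining node serves), which is why LEACH re-runs this election, and its energy cost, in every round.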
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers have thrown up new challenges for legacy and existing networks, and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Networking (SDN), which decouples the control plane from the data plane through network virtualization, aims to address these challenges. This paper explores the SDN architecture and its implementation with the OpenFlow protocol. It also assesses some of SDN's benefits over traditional network architectures, its security concerns, and how these can be addressed in future research and related work in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling in the "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depends on a system administrator who manually reads every incoming report [3]. Manual reading can lead to errors in handling complaints [4]: when the data flow is huge and grows rapidly, at least three days are needed to prepare a confirmation, and the process is sensitive to inconsistencies [3]. In this study, the authors propose a model that measures the similarity between a query (an incoming report) and documents (the archive). The authors employed a class-based indexing term-weighting scheme and cosine similarity to analyse document similarity. The CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values were used as classification features for a K-Nearest Neighbour (K-NN) classifier. The best evaluation result used a 75%/25% training/test split with the CoSimTFIDF feature, delivering a high accuracy of 84%; the value k = 5 obtained an accuracy of 84.12%.
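The cosine-similarity step between an incoming query and archived complaints can be sketched with plain bag-of-words counts. This is an illustration only: the paper's class-based TF-IDF weighting is omitted, the tokenization is naive whitespace splitting, and the function names are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query, archive):
    """Return the index of the archived complaint closest to the query."""
    q = Counter(query.lower().split())
    scores = [cosine(q, Counter(doc.lower().split())) for doc in archive]
    return max(range(len(archive)), key=lambda i: scores[i])
```

In the paper's pipeline, these similarity values (computed under the different weighting schemes) become the features fed to the K-NN classifier rather than being used for a direct nearest-match lookup.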
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
International Journal of Computer Applications Technology and Research
Volume 5, Issue 11, 687-692, 2016, ISSN: 2319-8656
www.ijcat.com
Activity Context Modeling in Context-Aware
Environment
Samuel King Opoku
Computer Science Department,
Technical University of Kumasi
Kumasi, Ghana
Abstract: The explosion of mobile devices has fuelled the advancement of pervasive computing to provide personal assistance in this
information-driven world. Pervasive computing takes advantage of context-aware computing to track, use and adapt to contextual
information. The context that has attracted the attention of many researchers is the activity context. There are six major techniques that
are used to model activity context. These techniques are key-value, logic-based, ontology-based, object-oriented, mark-up schemes and
graphical. This paper analyses these techniques in detail by describing how each technique is implemented while reviewing their pros
and cons. The paper ends with a hybrid modeling method that fits heterogeneous environment while considering the entire of modeling
through data acquisition and utilization stages. The modeling stages of activity context are data sensation, data abstraction and
reasoning and planning. The work revealed that mark-up schemes and object-oriented are best applicable at the data sensation stage.
Key-value and object-oriented techniques fairly support data abstraction stage whereas the logic-based and ontology-based techniques
are the ideal techniques for reasoning and planning stage. In a distributed system, mark-up schemes are very useful in data
communication over a network and graphical technique should be used when saving context data into database.
Keywords: context; context modeling; context-aware computing; hybrid context model; mobile computing; pervasive computing
1. INTRODUCTION
With the proliferation of mobile devices, one looks to
pervasive computing to provide personal assistance in this
digital world. Pervasive computing refers to obtaining
available data any time at any place. One critical aspect of
pervasive computing is context-aware computing in which
applications are made to track, use and adapt to contextual
information. The definition of context by Dey and Abowd [1]
which is also adopted in this work is, “Any information that
can be used to characterize the situation of an entity. An entity
is a person, place or object that is considered relevant to the
interaction between a user and an application including the
user and application themselves”. The relatively important
contexts [2], [3] are location, time, identity and activity. The
first three contexts are obtained through well-known
mechanisms. The location context can be determined when
the device is outdoor through Global Positioning System
(GPS) [4] or indoor through radio frequency technologies like
Bluetooth [5]. Concerning the time of day, the clock system of a mobile device can be used in connection with an external authenticated system [6]. However, many techniques
are employed in tracking activity context. This paper reviews
the prominent techniques stating the pros and cons of the
various mechanisms. The paper ends with the best approach
for activity modeling through the various stages of data
acquisition and interpretation.
Context modeling is the process of identifying the appropriate
contextual information of interest and establishing
relationships and reactions among the pieces of contextual
information. Schmohl and Baumgarten in [7] suggested that context modelling requires two phases: the first phase is to determine the conceptual abstraction of real-world characteristics; the second phase is to map the concepts onto a context model that represents the information. To sense and
interpret contextual information, many components are
required. The core components are sensors, middleware,
context repository, context reasoning and communication
interface. A context sensor can be hardware or software that fetches raw data for contextual information. Because the raw
data cannot be interpreted by high level languages [8], there is
the need for a middleware that refines the raw data from
sensors into the required data structures [9]. The context
repository stores the current context based on the data
structures generated by the middleware. Context reasoning
infers new context based on the current context information
obtained from the middleware. The context reasoning relies
on a set of rules [10] to infer the new context. A communication interface is required in distributed systems: the reasoning mechanism may commit contextual updates to a network or may request contextual information through a network. After inference, the context reasoner passes the result to the context-aware application, which uses the information to adapt itself through the execution or termination of certain programs. A conceptual view of the context utilization mechanism is illustrated below:
Figure 1 Context Acquisition and Utilization Framework
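The components just described can be summarized in code. The sketch below is a toy Java rendering of the sense-refine-infer chain, under the assumption of string-valued context; all interface and class names are illustrative, not part of any cited framework:

```java
// Minimal end-to-end sketch of the acquisition pipeline in Figure 1.
interface ContextSensor { String fetchRawData(); }          // raw data source
interface Middleware { String refine(String raw); }          // raw -> structured
interface ContextReasoner { String infer(String context); }  // rule-based inference

public class ContextPipeline {
    public static String run(ContextSensor s, Middleware m, ContextReasoner r) {
        String raw = s.fetchRawData();       // 1. sense
        String ctx = m.refine(raw);          // 2. abstract
        return r.infer(ctx);                 // 3. reason and plan
    }

    public static void main(String[] args) {
        ContextSensor sensor = () -> " 34 ";            // e.g. a sensed age token
        Middleware mw = raw -> raw.trim();              // trivial refinement
        ContextReasoner reasoner = c ->                 // toy if-then rule
            c.matches("\\d{1,3}") ? "CS" : "BK";
        System.out.println(run(sensor, mw, reasoner));  // prints CS
    }
}
```

A real deployment would of course replace the string plumbing with the data structures produced by the middleware, but the division of labour between the components is the same.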
There are six mechanisms or models that can be used to sense
and interpret a context within the various components stated
above. These are key-value, mark-up scheme, graphical,
object-oriented, logic-based and ontology-based models.
2. MODELING TECHNIQUES
The six modeling techniques that can be used to model activity context are discussed in detail below. To illustrate how each model is implemented, the following case study is modeled during the discussion of the various techniques: "A computer science book (BK) is read by a computer scientist (CS). BK has Title, ISBN, Author and Barcode as its attributes. The title and author are of string data type whereas ISBN and Barcode are of integer data type. The author is usually a computer scientist, who is of course a human. Humans are characterized by name and age, of string and integer data types respectively."
2.1 Key-Value Models
They require simple data structures to associate context
attributes with specific values of conceptual information [7].
Schilit et al in [11] used key-value pairs to model the context
by assigning the value of context information to an
environment variable in an application. From the case study
above, the characteristics of the entities – BK and CS – which
form the keywords for the modeling are:
BK: title, ISBN, author, barcode
CS: name, age
Since human and CS are the same in terms of characteristics,
human is ignored in the modeling. The characteristics that can
be used to identify BK are ISBN and barcode. Both features
are of integer data types, though barcodes are sometimes alphanumeric. This work restricts itself to numeric barcodes only; it is therefore not applicable to alphanumeric barcodes. There is another integer feature, age, which belongs to CS. However, the length of age should not be more than three digits, whereas barcodes and ISBNs have varying lengths, usually more than three. Hence, every set of data consisting of numeric characters with length greater than three is associated with BK. The other features of BK are author and title. They cannot be used in our model since author and title consist of character strings similar to the features of CS. Thus, to avoid ambiguity, author and title are ignored. Similarly, the name feature of CS is also ignored. Regarding CS, the characteristic that can be used to identify CS is the age. Thus, if the data is an integer with length less than or equal to three, then CS is being referenced. The figure below gives the key-value model for the above case study.
Figure 2 Key-Value Sample Model
There is no relationship established between BK and CS as
discussed above. The characteristics provided should be
followed closely so that entities can be identified and
reasoning inferred. It can be deduced that the key-value model is flexible and easy to manage in a small system. However, the model limits the amount of data that can be represented. It is also application-dependent and not adaptive, with no support for validation or relationship modeling.
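A minimal key-value sketch of this identification rule, written in Java with illustrative names, might be:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyValueContext {
    // Context store: attribute name -> raw sensed value.
    static Map<String, String> store = new HashMap<>();

    // Classify a datum by the length rule from the case study:
    // numeric strings longer than 3 characters identify BK (ISBN/barcode),
    // numeric strings of length <= 3 identify CS (age).
    static String classify(String value) {
        if (!value.matches("\\d+")) return "UNKNOWN"; // non-numeric: ambiguous
        return value.length() > 3 ? "BK" : "CS";
    }

    public static void main(String[] args) {
        store.put("ISBN", "9780131103627");
        store.put("age", "34");
        System.out.println(classify(store.get("ISBN"))); // prints BK
        System.out.println(classify(store.get("age")));  // prints CS
    }
}
```

Note how the rule itself lives in application code, which is exactly the application dependence the text identifies as the model's weakness.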
2.2 Mark-up scheme models
They use hierarchical data structures based on mark-up tags with attributes and content. They are based on the serialization of the Standard Generalized Mark-up Language (SGML), the superclass of all mark-up languages. Among the commonly used mark-up schemes are the eXtensible Mark-up Language (XML) and the Resource Description Framework (RDF). XML is used to package data or information, and RDF is used for conceptual description or modelling of web resources using a data serialization format. The above case study is designed using XML. An XML file cannot contain multiple root elements; thus each root element – BK, CS and Human – is contained in a separate file. In the figure below, only the BK section is implemented.
Figure 3 Mark-up Sample Model
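The BK section referenced above might be rendered in XML along these lines (element names follow the case-study attributes; the values are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<BK>
  <Title type="string">Sample Title</Title>
  <ISBN type="integer">9780131103627</ISBN>
  <Author type="string">Sample Author</Author>
  <Barcode type="integer">4006381333931</Barcode>
</BK>
```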
Some researchers extended RDF capabilities to the Composite Capabilities/Preferences Profile (CC/PP) [12] and the User Agent Profile (UAProf) [13], which allow the definition of device capabilities and user preferences in context delivery. For instance, in [14] and [15], the researchers designed the Comprehensive Structured Context Profile (CSCP) and the CC/PP Context Extension respectively to handle the limitation of CC/PP, namely that it allows specific values only. Another context modelling approach which does not follow CC/PP is the Pervasive Profile Description Language (PPDL) [16], an XML-based language for designing interaction patterns on limited scales. In order to reduce bandwidth consumption, XML can be combined with JavaScript Object Notation (JSON) [17], which reduces the transmission time of context-aware applications. JSON is a lightweight data exchange format that computers can generate and parse easily and quickly. Mark-up schemes are flexible and structured. Tools for processing them are available, and they are very useful when data travels along a network or communication link. Unfortunately, mark-up schemes are application-dependent, and information is hard to extract from them.
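For comparison, the same BK record could be exchanged in the lightweight JSON format mentioned above (values again placeholders):

```json
{
  "BK": {
    "title": "Sample Title",
    "isbn": 9780131103627,
    "author": "Sample Author",
    "barcode": 4006381333931
  }
}
```

The JSON rendering carries the same hierarchy as the XML version but with noticeably fewer bytes of tag overhead, which is the bandwidth saving the text refers to.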
2.3 Graphical models
They represent contextual entities and their relationships
graphically using tools like Uniform Modelling Language
(UML) and Entity Relationships (ER) schemas. UML is very
3. International Journal of Computer Applications Technology and Research
Volume 5–Issue 11, 687-692,2016,ISSN:-2319–8656
www.ijcat.com 689
expressive since it uses directed graphs to indicate the relationships between concepts. However, it is difficult to work with UML [18] due to its increasing complexity: as of 2016, there were more than fourteen different diagram types, making it virtually impossible for developers to master them all. ER, on the other hand, describes conceptual pieces as entities and their interactions as relationships. The major limitation of ER is the lack of semantics between entities, owing to its limited representation of relationships, data manipulation, constraints and specifications. The above case study is modeled using ER; the entities in the case study are CS, BK and Human.
Figure 4 Sample Entity Relations Graphical Model
To handle the limitations of ER, extensions have been provided by researchers through the addition of extra features in context modeling. Bauer in [19] used UML extensions to model an air traffic management system. Another type of graphical model is the Object-Role Model (ORM) [20], a conceptual-level modelling method that handles the semantics of data and the interrelationships among the data. ORM models concepts as facts [21]. The work in [22] extended ORM by employing contextual classification and description properties, which introduced a history fact to cover the time aspect of the context and fact dependencies; thus a change in one fact automatically leads to a change in another fact. According to Mohan and Singh in [21], the formal semantics of ORM and the Context Modeling Language (CML) can be supplemented to provide integration with other implementations. As its strengths, the graphical modeling technique provides relationship modeling and flexible implementation, and it is very useful for data storage and historic context stores [23]. Its limitations, however, include the complexity of retrieving information, which requires obligatory configuration, and its lack of support for interoperability between heterogeneous implementations.
2.4 Object-Oriented Models
They employ object-oriented characteristics such as encapsulation and reusability. Encapsulation hides the implementation of objects while reusability allows models to be reused. The Active Object Model of the GUIDE project in [24] was based on the object-oriented model. In an object-oriented model, entities are modeled using classes which are implemented as objects. The attributes are implemented as data fields or variables, whereas entity behaviors are modeled as methods (in Java) or functions (in C++). The above case study is modeled using Java in the figure below.
Figure 5 Object-Oriented Sample Model
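The described class structure can be sketched in Java as follows (field names follow the case study; the read behaviour and sample values are illustrative assumptions):

```java
// Humans are characterized by name (String) and age (int).
class Human {
    String name;
    int age;
    Human(String name, int age) { this.name = name; this.age = age; }
}

// A computer scientist is a Human; entity behaviour becomes a method.
class ComputerScientist extends Human {
    ComputerScientist(String name, int age) { super(name, age); }
    String read(Book bk) { return name + " reads " + bk.title; }
}

// BK has Title, ISBN, Author and Barcode as its attributes.
class Book {
    String title;
    long isbn;
    ComputerScientist author;
    long barcode;
    Book(String title, long isbn, ComputerScientist author, long barcode) {
        this.title = title; this.isbn = isbn;
        this.author = author; this.barcode = barcode;
    }
}

public class OOContextDemo {
    public static void main(String[] args) {
        ComputerScientist cs = new ComputerScientist("Ama", 34);
        Book bk = new Book("Context Modeling", 9780131103627L, cs, 4006381333931L);
        System.out.println(cs.read(bk)); // prints Ama reads Context Modeling
    }
}
```

Encapsulation and inheritance give the relationship modeling (CS extends Human, Book holds a CS author) that the key-value model lacked.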
The object-oriented model provides relationship modeling. Tools for processing are available, and it allows easy integration while supporting data transfer over a network or communication link. It is, however, limited by the complexity of retrieving information.
2.5 Logic-based Models
They use rules and expressions to define a context. Logics are
used to define conditions for formulating expressions or facts.
It is best used at the reasoning section of a context-aware
application. The first logic based context modelling approach
occurred in 1993 by McCarthy [25] and refined in 1997 [26].
The refined work introduced abstract mathematical entities
complemented with useful properties in artificial intelligence
which allowed simple axioms to be used in common sense
phenomena. Akram and Surav in [27] tried to give theoretical
semantic model of natural language in a formal logic system.
Many researchers had focused on first order logic in their
implementations. Gray and Salber in [28] used it to represent
contextual propositions and relations. In [29], first order logic
was used in connection with Boolean algebra to design
middleware infrastructure called Gaia which allowed various
rules to describe context information. It was used in [30] to
describe context information properties and structure and the
kinds of operations they can perform. Another application is
to use logic based model with other modelling techniques. Gu
et al in [31] proposed Service-Oriented Context-Aware
Middleware (SOCAM) architecture for building context-
aware services. In their model, first-order predicate calculus
was used to model a context and used different modeling
techniques, ontology based model written in Web Ontology
Language (OWL) as a collection of RDF triples, to describe
the context predicate. Using the first order logic, the following
model can be obtained from the above case study:
∀x (is-isbn(x) ˅ is-barcode(x) => BK(x))
∀x,y ((is-name(x) ˄ is-age(y)) => CS(x, y))
∀x (is-CS(x) => is-Human(x))
∀x (is-BK(x) => ¬is-CS(x))
∀x (is-BK(x) => ¬is-Human(x))
∀x,y (((is-author(x) ˄ is-title(x)) ˅ ¬is-age(y)) => BK(x, y))
The above first order logic can also be described using
fuzzy logic. The figure below illustrates the fuzzy logic
representation.
Figure 6 Fuzzy Logic Sample Model
The strengths of the logic-based model are that it generates high-level context from low-level context and that it is simple to use and to model. Its limitations are the complexity of applying it and the difficulty of maintaining it due to only partial validation.
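As a toy illustration of how such if-then rules generate high-level context from low-level context (and not the SOCAM or Gaia machinery), the predicates above could be coded in Java as:

```java
public class LogicRules {
    // Low-level predicates over a sensed token (assumptions mirroring
    // the first-order rules in the text and the key-value length rule).
    static boolean isNumeric(String x) { return x.matches("\\d+"); }
    static boolean isIsbnOrBarcode(String x) { return isNumeric(x) && x.length() > 3; }
    static boolean isAge(String x) { return isNumeric(x) && x.length() <= 3; }

    // High-level context inferred from low-level context:
    // is-isbn(x) v is-barcode(x) => BK(x)
    static boolean isBK(String x) { return isIsbnOrBarcode(x); }
    // is-age(y) => CS(y)  (the name component is dropped here for brevity)
    static boolean isCS(String y) { return isAge(y); }
    // is-CS(x) => is-Human(x)
    static boolean isHuman(String x) { return isCS(x); }

    public static void main(String[] args) {
        System.out.println(isBK("9780131103627")); // prints true
        System.out.println(isCS("34"));            // prints true
        System.out.println(isHuman("34"));         // prints true
    }
}
```

Each method corresponds to one implication, so adding or revising a rule means editing code, which is the maintenance cost the text notes.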
2.6 Ontology-based Models
Ontologies are used to represent concepts and their
interrelationships. This model enables contextual knowledge
sharing and reusability [7]. Ontologies are the most expressive
context representation models [8] that support dynamic
aspects of context awareness. However, they require ontology engines, which make high demands on resources, producing a negative performance impact on local context processing where resource-constrained devices are employed. Among the prominent proposals for ontology-based modeling techniques are SOUPA [33] for pervasive environments and CONON [34]
for smart home environment. The figure below illustrates the
ontology model of the above case study.
Figure 5: Ontology Model Sample
The ellipses define the basic classes and every arrow defines a
relation between these entities. The rectangles are added to
show data type values for completion.
OWL-DL, or one of its variations, is the usual choice for modeling context [32] since it is supported by a number of reasoning services. The Web Ontology Language – Description Logic (OWL-DL) allows the definition of classes, individuals, characteristics of individuals and relations between individuals. The figure below illustrates the OWL-DL version of part of the model.
Figure 6: OWL-DL Model Sample
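Part of the model could be serialized concretely in OWL; the sketch below uses the Turtle syntax with illustrative IRIs, and does not reproduce the exact axioms of the figure:

```turtle
@prefix :     <http://example.org/context#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

:Human a owl:Class .
:CS    a owl:Class ; rdfs:subClassOf :Human .
:BK    a owl:Class ; owl:disjointWith :Human .

:isReadBy  a owl:ObjectProperty ; rdfs:domain :BK ; rdfs:range :CS .
:hasAuthor a owl:ObjectProperty ; rdfs:domain :BK ; rdfs:range :CS .

:isbn a owl:DatatypeProperty ; rdfs:domain :BK    ; rdfs:range xsd:integer .
:name a owl:DatatypeProperty ; rdfs:domain :Human ; rdfs:range xsd:string .
:age  a owl:DatatypeProperty ; rdfs:domain :Human ; rdfs:range xsd:integer .
```

A reasoner given these axioms can, for example, infer that every CS individual is also a Human, which is exactly the kind of semantic reasoning the text credits to ontology-based models.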
Generally, ontology-based models support semantic reasoning, provide an easier representation of context and are backed by standardization. Retrieving data from them is, however, complex. Also, it can be deduced from the OWL-DL model in Figure 6 that OWL-DL is inadequate for defining complex context descriptions, which affects ontological context reasoning.
3. BEST CHOICE TECHNIQUE
What is the best technique(s) to use to model activity context?
The works of [21] and [35] introduced hybrid approach by
combining different modelling techniques to improve
performance. Perera et al in [36] surveyed high-level context
modelling techniques and concluded that diversification of
modelling techniques is the best way to provide efficient
results which will lessen each other’s weaknesses. Finally,
[23] discussed the context modelling techniques and used
logic based technique to model spatiotemporal context. They
finally concluded that no modelling technique is ideal to be
used in a standalone manner. From the above discussion, it is
therefore laudable to combine modeling techniques to produce
efficient results. Thus the best technique from context
acquisition to utilization is hybrid
There are many stages involved in sensing and interpreting
context. At each stage, it is better to study the strengths and
limitations of each modelling technique and choose the more suitable one. For example, at the middleware layer, if the abstracted context is to be sent over a network, it will not be appropriate to use the logic-based, graphical or key-value modeling techniques. However, the mark-up and object-oriented modelling techniques can be used, depending on hardware, software and architectural heterogeneity and the interoperability between the communicating devices.
Similarly, it will be ideal to model context reasoning using the
logic-based modeling technique when the application chooses
to use the usual if-then rule set mechanisms. Hence the
dynamic behaviour of the context-aware system cannot be
overlooked when choosing context modeling techniques. The
figure below illustrates the appropriate modeling techniques
that can be used at the various modeling stages from which
the desired method can be selected.
Figure 7 Modeling Implementation of Activity Context Life Cycle
4. REFERENCES
[1] Dey, A. K., and Abowd, G. D. Towards a better
understanding of context and context-awareness.
Workshop on the What, Who, Where and How of
Context-Awareness, affiliated with the 2000 ACM
Conference on Human Factors in Computing Systems
(CHI 2000), 2000
[2] Dey, A. K. Context-aware computing. Ubiquitous
Computing Fundamentals, 321-352, 2010
[3] Castillejo, E., Almeida, A. & Lopez-de-Ipina, D.
Modelling users, context and devices for adaptive user
interface systems. International journal of pervasive
computing and communications, 10(1), 69-91, 2014
[4] Bajaj, R., Ranaweera S. L., & Agrawal, D. P., GPS:
Location-Tracking Technology. IEEE 35(4), 92–94,
2002
[5] Opoku, S. K., An indoor tracking system based on
Bluetooth technology. Cyber Journals: Multidisciplinary
Journals in Science and Technology, Journal of Selected
Areas in Telecommunications (JSAT), 2(12), 1-8, 2011
[6] Opoku, S. K., An automated biometric attendance
management system with dual authentication mechanism
based on Bluetooth and NFC technologies. International
Journal of Computer Science and Mobile Computing
2(3), 18-25, 2013
[7] Schmohl, R. & Baumgarten, U., Context-aware
computing: a survey preparing for generalized approach.
Proceedings of the international multi-conference of
engineers and computer scientists, UMECS, 1(3), 19-21,
2008
[8] Strang, T. & Linnhoff-Popien, C. (2004). A context
modeling survey. In Proceedings of the Workshop on
Advanced Context Modelling, Reasoning and
Management associated with the 6th International
Conference on Ubiquitous Computing (UbiComp),
Nottingham.
[9] Gellersen, H. W., Schmidt, A. & Beigl, M., Multi-sensor
context-awareness in mobile devices and smart artifacts.
Mobile Networks Applications, 7(5):341–351, 2002
[10] Jacob, C., Linner, D., Radusch, I. & Steglich, S. (2007).
Loosely coupled and context-aware service provision
incorporating the quality of rules. In ICOMP 07:
Proceedings of the 2007 International Conference on
Internet Computing. CSREA Press
[11] Schilit, B., Adams, N., Want, R., et al. (1994). Context-
aware Computing Applications. Xerox Corp., Palo Alto
Research Center.
[12] W3C. Composite Capabilities / Preferences Profile
(CC/PP). http://www.w3.org/Mobile/CCPP.
[13] Wapforum. User Agent Profile (UAProf).
http://www.wapforum.org.
[14] Held, A., Buchholz, S. & Sachill, A. (2002) Modeling of
context information for pervasive computing
applications, In Proceedings of SCI 2002/ISAS 2002
[15] Indulska, J. & Robinson, R. (2003). Experiences in
using CC/PP in context-aware systems. In LNCS 2574:
Proceedings of the 4th International Conference on
Mobile Data Management (MDM2003)
(Melbourne/Australia, January 2003), M. S. Chen, P. K.
Chrysanthis, M. Sloman, and A. Zaslavsky, Eds., Lecture
Notes in Computer Science (LNCS), Springer, pp. 247–
261.
[16] Chtcherbina, E. & Franz, M. (2003). Peer-to-peer
coordination framework (p2pc): Enabler of mobile ad-
hoc networking for medicine, business, and
entertainment, In Proceedings of International
Conference on Advances in Infrastructure for Electronic
Business, Education, Science, Medicine, and Mobile
Technologies on the Internet (SSGRR2003w)
(L’Aquila/Italy, January 2003).
[17] Erfianto, B., Mahmood, A. K. & Rahman, A. S. A. (2007,
December 11-12). Modelling context and exchange
format for context-aware computing. Proceedings of 5th
student conference on research and development.
Malaysia
[18] Henricksen, K., Indulska, J., Modeling Context
Information in Pervasive Computing Systems. 167–180,
2002
[19] Bauer, J. (2003, March). Identification and Modeling of
Contexts for Different Information Scenarios in Air
Traffic. Diplomarbeit
[20] Halpin, T. A., (2001). Information Modeling and
Relational Databases: From Conceptual Analysis to
Logical Design, Morgan Kaufman Publishers, San
Francisco.
[21] Mohan, P. & Singh, M. (2013). Formal models for
context-aware computing. International Journal of
Computer Applications Technology and Research
(IJCATR), 2(1), 53-58
[22] Henricksen, K., Indulska, J., and Rakotonirainy, A.
(2003). Generating Context Management Infrastructure
from High-Level Context Models. In Industrial Track
Proceedings of the 4th International Conference on
Mobile Data Management (MDM2003)
(Melbourne/Australia, January 2003), pp. 1–6.
[23] Ameyed, D., Miraoui, M. & Tadj, C. (2016).
Spatiotemporal context modelling in pervasive context-
aware computing environment: A logic perspective.
International Journal of Advanced Computer Science and
Applications, 7(4), 407-414
[24] Cheverst, K., Mitchell, K. & Davies, N. (1999). Design
of an object model for a context sensitive tourist GUIDE,
Computers and Graphics 23(6), 883–891
[25] McCarthy, J. (1993). Notes on formalizing contexts, In
Proceedings of the Thirteenth International Joint
Conference on Artificial Intelligence (San Mateo,
California, 1993), R. Bajcsy, Ed., Morgan Kaufmann, pp.
555–560.
[26] McCarthy, J. & Buvač, S. (1997). Formalizing context. In
Working Papers of the AAAI Fall Symposium on
Context in Knowledge Representation and Natural
Language (Menlo Park, California, 1997), American
Association for Artificial Intelligence, pp. 99–135.
[27] Akman, V. & Surav, M. (1997). The use of situation
theory in context modelling. Computational Intelligence,
13, 427-438
[28] Gray, P. & Salber, D. (2001). Modelling and using
sensed context information in the design of interactive
applications, in Engineering for Human-Computer
Interaction, ed: Springer, 317-335.
[29] Roman, M., Hess, C., Cerqueira, R., Ranganathan, A.,
Campbell, R. H., & Nahrstedt, K. (2002). A middleware
infrastructure for active spaces. IEEE Pervasive
Computing, 1(4), 74-83
[30] Ranganathan, A. & Campbell, R. H. (2003). An
infrastructure for context-awareness based on first order
logic. Personal and Ubiquitous Computing, 7, 353-364
[31] Gu, T., Pung, H. K. & Zhang, D. Q. (2005). A service-
oriented middleware for building context-aware services.
Journal of Network and Computer Applications, 28, 1-18
[32] Horrocks, I., Patel-Schneider, P. F. & van Harmelen, F.
(2003). From SHIQ and RDF to OWL: The making of a
web ontology language, Journal of Web Semantics, 1 (1),
7–26.
[33] Chen, H., Perich, F., Finin, T. W., Joshi, A. (2004).
SOUPA: Standard Ontology for Ubiquitous and
Pervasive Applications, in: 1st Annual International
Conference on Mobile and Ubiquitous Systems
(MobiQuitous 2004), IEEE Computer Society
[34] Zhang, D., Gu, T., Wang, X. (2005). Enabling Context-
aware Smart Home with Semantic Technology,
International Journal of Human-friendly Welfare Robotic
Systems, 6 (4), 12–20
[35] Bettini, C., Brdiczka, O., Henricksen, K., Indulska, J.,
Nicklas, D., Ranganathan, A. et al. (2010). A survey of
context modelling and reasoning techniques. Pervasive
and Mobile Computing. 6, 161-180
[36] Perera, C., Zaslavsky, A., Christen, P. and
Georgakopoulos, D. (2014). Context aware computing
for the internet of things: A survey. Communications
Surveys & Tutorials, IEEE, 16, 414-454