Context-aware services are attracting worldwide attention as the use of web services grows rapidly. We designed a context-aware semantic web architecture that provides on-demand flexibility and scalability in extracting and mining research papers from well-known digital libraries, i.e. ACM, IEEE and SpringerLink. This paper proposes a context-aware services system that supports automatic discovery and integration of context based on semantic web services. The work was implemented in the Python programming language with a dedicated semantic web library named "CubicWeb", applied to a defined dataset; in our case, a dataset of publications was extracted and studied to measure the impact of the context-aware semantic web application on the results. We report the average recall and average accuracy across all the context-aware research journals in our work. Moreover, as this study is limited to journal documents, future studies can extend the approach by examining other types of publications. An efficient system has been designed around research-article metadata parameters to find papers on the web using semantic web technology: year of publication, type of publication, number of contributors, and the evaluation and analysis methods used in the publication. All this data has been extracted using the designed context-aware semantic web technology.
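The metadata-driven filtering described above can be sketched as follows. This is a minimal illustration with hypothetical records and field names, not the paper's actual CubicWeb schema:

```python
# Hypothetical sketch: filter extracted paper metadata by the parameters
# named above (year, publication type, number of contributors). The records
# and field names are illustrative, not the system's real data model.

def filter_papers(papers, year=None, pub_type=None, min_contributors=None):
    """Return papers matching the given metadata criteria."""
    results = []
    for p in papers:
        if year is not None and p["year"] != year:
            continue
        if pub_type is not None and p["type"] != pub_type:
            continue
        if min_contributors is not None and len(p["authors"]) < min_contributors:
            continue
        results.append(p)
    return results

papers = [
    {"title": "A", "year": 2019, "type": "journal", "authors": ["x", "y"]},
    {"title": "B", "year": 2020, "type": "conference", "authors": ["z"]},
]
print(filter_papers(papers, year=2019, pub_type="journal"))
```

In the real system such criteria would be expressed as semantic queries over the extracted metadata rather than in-memory filtering.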
IRJET - Computational model for the processing of documents and support to the ... (IRJET Journal)
This document proposes a computational model for processing documents and supporting decision making in information retrieval systems. The model includes five main components: 1) a tracking and indexing component to crawl the web and store document metadata, 2) an information processing component to categorize documents and define user profiles, 3) a decision support component to analyze stored information and generate statistical reports, 4) a display component to provide search interfaces and visualization tools, and 5) specialized roles to administer the system. The goal of the model is to provide a framework for developing large-scale search engines.
ODAM: An optimized distributed association rule mining algorithm (synopsis) (Mumbai Academisc)
This document proposes ODAM, an optimized distributed association rule mining algorithm. It aims to discover rules based on higher-order associations between items in distributed textual documents that are neither vertically nor horizontally distributed, but rather a hybrid of the two. Modern organizations have geographically distributed data stored locally at each site, making centralized data mining infeasible due to high communication costs. Distributed data mining emerged to address this challenge. ODAM reduces communication costs compared to previous distributed ARM algorithms by mining patterns across distributed databases without requiring data consolidation.
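The communication-saving idea behind distributed ARM can be illustrated with a small sketch: each site computes compact local itemset counts, and only those counts are merged, never the raw transactions. The transactions and helper names here are illustrative, not ODAM's actual implementation:

```python
from collections import Counter
from itertools import combinations

# Illustrative sketch of the core idea behind distributed association rule
# mining: each site counts itemset support locally, and only the compact
# counts (not the raw transactions) are sent to the merging site.

def local_supports(transactions, size=2):
    """Count support of all itemsets of the given size at one site."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1
    return counts

def merge_supports(*site_counts):
    """Combine per-site counts into global supports."""
    total = Counter()
    for c in site_counts:
        total.update(c)
    return total

site1 = [{"bread", "milk"}, {"bread", "butter"}]
site2 = [{"bread", "milk"}, {"milk", "butter"}]
merged = merge_supports(local_supports(site1), local_supports(site2))
print(merged[("bread", "milk")])  # → 2: global support of {bread, milk}
```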
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
IRJET - Model for semantic processing in information retrieval systems (IRJET Journal)
This document proposes a model for semantic information retrieval that improves upon traditional keyword matching approaches. It involves three main components:
1. A crawling and indexing component that identifies websites and pages, extracts metadata, and generates a knowledge graph through semantic annotation.
2. A processing component that analyzes user queries and profiles to understand search intent, calculates semantic similarity between queries and indexed documents, and determines result relevance.
3. A presentation component that displays search results to users through both simple and advanced search interfaces, prioritizing the most relevant information based on the above processing.
The model is intended to address deficiencies in current Cuban web search by better understanding natural-language queries and the contextual meaning of information through semantic technologies.
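The semantic-similarity step in component 2 can be sketched with a simple term-vector cosine measure. This is a minimal stand-in; a real semantic system would compare concept vectors derived from the knowledge graph rather than raw words:

```python
import math
from collections import Counter

# Minimal sketch of query-document similarity scoring: cosine similarity
# over bag-of-words term-frequency vectors. A semantic retrieval system
# would replace word counts with annotated concepts from its knowledge graph.

def cosine_similarity(text_a, text_b):
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    common = set(va) & set(vb)
    dot = sum(va[w] * vb[w] for w in common)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("semantic web search", "semantic search engine"))  # ≈ 0.67
```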
An adaptive clustering and classification algorithm for Twitter data streamin... (TELKOMNIKA JOURNAL)
Ongoing big data from social network sites like Twitter or Facebook has been an attractive
source of investigation for researchers in recent decades owing to various aspects including
timeliness, accessibility and popularity; however, there may be a trade-off in accuracy.
Moreover, clustering of Twitter data has caught the attention of researchers. As such, an algorithm that
can cluster data in less computational time, especially for data streaming, is needed. The presented
adaptive clustering and classification algorithm for data streaming in Apache Spark overcomes
the existing problems and is processed in two phases. In the first phase, the pre-processed input Twitter
data is clustered using Improved Fuzzy C-means clustering, and the clustering is further
improved by an Adaptive Particle Swarm Optimization (PSO) algorithm. The clustered data
stream is then evaluated using the Spark engine. In the second phase, the pre-processed input Higgs data is
classified using a modified support vector machine (MSVM) classifier with grid-search optimization.
Finally, the optimized information is evaluated in the Spark engine, and the evaluated value is used to
derive the resulting confusion matrix. The proposed work uses a Twitter dataset and a Higgs
dataset for data streaming in Apache Spark. The computational experiments demonstrate the superiority of
the presented approach over existing methods in terms of precision, recall, F-score,
convergence, ROC curve and accuracy.
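The evaluation metrics named above follow directly from a binary confusion matrix; a minimal sketch of deriving precision, recall, and F-score from the counts:

```python
# Deriving precision, recall, and F-score from binary confusion-matrix
# counts: true positives (tp), false positives (fp), false negatives (fn).

def metrics(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score

p, r, f = metrics(tp=80, fp=20, fn=20)
print(p, r, f)  # each ≈ 0.8
```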
Presentation on web mining and data mining. Data mining is the process of discovering insightful, interesting, and novel patterns, as well as descriptive, understandable and predictive models, from large-scale data. Data analysis aims to explore the numeric and categorical attributes of the data, individually or jointly, to extract key characteristics of the data sample via statistics that give information about centrality, dispersion, and so on.
The International Journal of Engineering & Science aims to provide a platform for researchers, engineers, scientists, and educators to publish their original research results, exchange new ideas, and disseminate information on innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
This document summarizes a research paper on using clustering approaches to improve the discovery of semantic web services. It begins by defining semantic web services and semantic similarity measures. It then discusses using clustering to eliminate irrelevant services from a collection before applying semantic algorithms. Specifically, it proposes a clustering probabilistic semantic approach (CPLSA) that filters services based on compatibility with a query before clustering the remaining services into semantically related groups using probabilistic latent semantic analysis (PLSA). The document concludes by discussing applications of approximate semantics and challenges in scaling semantic algorithms.
Web mining involves analyzing textual and link-structure data from the World Wide Web to discover useful information. It deals with petabytes of data generated daily and needs to adapt to evolving usage patterns in real time. Topics related to web mining include web graph analysis, power laws, structured data extraction, web advertising, user analysis, social networks, and blog analysis. The future will involve very large-scale data mining of datasets too big to fit in memory or even on a single disk.
A Generic Model for Student Data Analytic Web Service (SDAWS) (Editor IJCATR)
Any university management system accumulates a great deal of data, and analytics can be applied to it to gather useful
information to aid the academic decision-making process. This paper is a novel attempt to demonstrate the significance of a data
analytic web service in the education domain. It can be integrated easily with the University Management System or any other application
of the university. Analytics as a web service offers many benefits over traditional analysis methods. The web service can be
hosted on a web server and accessed over the internet, or on the private cloud of the campus. Data from various courses in
different departments can be uploaded and analyzed easily. In this paper we design a web service framework for educational
data mining that provides analysis as a service.
Time-Ordered Collaborative Filtering for News Recommendation (IRJET Journal)
This document presents a novel news recommendation system called Time-Ordered Collaborative Filtering for News Recommendation. The system filters and recommends news to users based on their choices and preferences using collaborative filtering and content-based filtering. It extracts news data from popular online newspapers and stores it in a database. The recommendation framework then preprocesses the news data and applies algorithms such as sparse-matrix computation to rank news and recommend the top news to each user based on their interests.
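The collaborative-filtering step can be sketched with a small user-item matrix: unseen news is scored by the ratings of other users, weighted by a crude overlap-based similarity. The ratings data and the similarity measure are illustrative, not the paper's actual method:

```python
# Minimal collaborative-filtering sketch: recommend unseen items to a user
# by summing other users' ratings, weighted by how much their reading
# histories overlap with the target user's.

def recommend(ratings, user, top_n=1):
    """ratings: {user: {item: score}}. Returns top unseen items for `user`."""
    seen = set(ratings[user])
    scores = {}
    for other, items in ratings.items():
        if other == user:
            continue
        # similarity = number of items both users rated (a crude overlap measure)
        sim = len(seen & set(items))
        for item, score in items.items():
            if item not in seen:
                scores[item] = scores.get(item, 0) + sim * score
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

ratings = {
    "alice": {"politics": 5, "sports": 3},
    "bob": {"politics": 4, "tech": 5},
    "carol": {"sports": 4, "tech": 2},
}
print(recommend(ratings, "alice"))  # → ['tech']
```

A time-ordered variant would additionally down-weight ratings from older sessions so that recent interests dominate.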
This document discusses web mining and its various types. Web mining involves using data mining techniques to discover useful information from web documents and usage patterns. It can involve content mining of text, images, video and audio to extract useful information. It also includes structure mining, which analyzes the hyperlink structure between documents and within documents. Additionally, web usage mining analyzes log files from web servers and applications to discover interesting usage patterns. The document outlines the differences between traditional data mining and web mining. It provides examples of applications of web mining such as information retrieval, network management and e-commerce.
This document proposes a new technique to enhance the learning capabilities and reduce the computation intensity of a competitive learning multi-layered neural network using the K-means clustering algorithm. The proposed model uses a multi-layered network architecture with backpropagation learning to analyze web log data. Data preprocessing steps like cleaning, user identification, and transaction identification are applied to prepare the enterprise proxy log data for analysis. The proposed framework aims to discover useful patterns from web log data through a combination of K-means clustering and a feedforward neural network.
Web Usage Mining Based on Request Dependency Graph (IRJET Journal)
This document discusses using request dependency graphs (RDGs) to model the dependency relationships between HTTP requests for web usage mining. RDGs can improve data quality and enhance network and web server performance. The authors evaluated their approach using a large real-world web access log and found that RDGs are a useful tool for web usage mining by extracting patterns from user access behaviors and decomposing websites.
A HEALTH RESEARCH COLLABORATION CLOUD ARCHITECTURE (ijccsa)
Cloud computing platforms used for research collaborations within public health domains are limited in how service components are organized and provisioned. This challenge is intensified by platform-level challenges of transparency, confidentiality, privacy and trust. Addressing these collaboration issues will necessitate that components be reorganized. There is a need for secure and efficient approaches to reorganizing the service components, with trust, to support collaboration-related requirements. Through iterative design, a reliable and trust-aware reorganization of cloud components, the Collaboration cloud architecture, is achieved. We utilize SOA, Privacy-by-Design principles and insights from blockchain to enforce trust. We illustrate its potential with a multi-layer security process flow based on server-to-client tokens, role-based access control and traditional username-and-password authentication, with assurance of privacy and trust. The architecture shows promise towards data governance and the overall management of internal and external data flows.
A detailed survey of page re-ranking: various web features and techniques (ijctet)
This document discusses techniques for page re-ranking on websites based on user behavior analysis. It describes how web usage mining involves analyzing web server logs to extract patterns in user behavior. Common techniques discussed for page re-ranking include Markov models, data mining approaches like clustering and association rule mining, and analyzing linked web page structures. The goal is to better understand user interests and predict future page access to improve information retrieval and optimize website design.
The document discusses using Web Oriented Architecture (WOA) principles and technologies to improve transparency, collaboration, and information sharing through publishing and linking government data on the Web. It describes exposing raw data and semantically enriched structured data as public records. Technologies that enable interoperability across disparate data sources for large-scale data federation are also described. Finally, the applicability of the proposed solution architecture to existing frameworks is discussed.
IDENTIFYING IMPORTANT FEATURES OF USERS TO IMPROVE PAGE RANKING ALGORITHMS (IJwest)
The web is a wide, varied and dynamic environment in which different users publish their documents. Web mining is one of the data mining applications in which web patterns are explored. Studies on web mining can be categorized into three classes: usage mining, content mining and structure mining. Today, the internet has found increasing significance, and search engines are considered an important tool for responding to users' interactions. Among the algorithms used to find pages desired by users is the PageRank algorithm, which ranks pages based on users' interests. As the most widely used algorithm by search engines, including Google, this algorithm has proved its eligibility compared to similar algorithms; but considering the growth speed of the internet and the increasing use of this technology, improving the performance of this algorithm is considered one of the necessities of web mining. The current study emphasizes the Ant Colony algorithm and marks the most-visited links based on higher amounts of pheromone. Results of the proposed algorithm indicate the high accuracy of this method compared to previous methods. The Ant Colony algorithm, one of the swarm intelligence algorithms inspired by the social behavior of ants, can be effective in modeling the social behavior of web users. In addition, usage mining and structure mining techniques can be used simultaneously to improve page-ranking performance.
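For reference, the baseline PageRank iteration the study builds on can be sketched as a simple power iteration; the ant colony variant then biases ranking toward links carrying more pheromone (that refinement is not shown here):

```python
# Baseline PageRank power iteration over a small link graph.
# links maps each page to the pages it links to.

def pagerank(links, damping=0.85, iters=50):
    """Return a rank per page; ranks sum to 1."""
    pages = list(links)
    n = len(pages)
    ranks = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * ranks[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * ranks[p] / n
        ranks = new
    return ranks

ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
print(ranks)
```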
In this world of information technology, everyone has a tendency to do business electronically. Today
a lot of business happens on the World Wide Web (WWW), so it is very important for website owners to
provide a better platform to attract more customers to their sites. Providing information in a better way is
the solution to bring in more customers or users. The customer is the end-user, who accesses the information
in a way that yields some credit to the website owners. In this paper we define web mining and present a
method to utilize web mining in a better way to understand user and website behaviour, which in turn
enhances the website's information to attract more users. This paper also presents an overview of the
various researches done on pattern extraction and web content mining, and how they can be taken as a catalyst
for e-business.
An agent-based collection development system is proposed that uses software agents to perform tasks related to selecting, acquiring, recording, and disseminating information for collection development. The system would use various agents like a Profile Agent to analyze faculty webpages and develop research interest profiles, a Citation Agent to identify publications and topics, a Search Agent to search resources and identify relevant works, and other agents to check the catalog, monitor usage, process interlibrary loans, and handle acquisitions. The agent system could automate many routine collection development tasks currently performed by librarians and help libraries better support faculty research needs.
MULTIFACTOR NAÏVE BAYES CLASSIFICATION FOR THE SLOW LEARNER PREDICTION OVER M... (ijcsa)
High school students must be observed for their slow-learning or quick-learning abilities in order to provide
them with the best education practices. Such analysis can be performed well over student
performance data. The high school student data was obtained from schools in various
regions of Punjab, a pivotal state of India. The complete student data, and a selection of almost 1300
students obtained from one school in the regions, underwent testing using the model proposed in
this paper. The proposed model is based upon the naïve Bayes classification model for data
classification using multi-factor features obtained from the input dataset. The subject groups were
divided into two primary groups: difficult and normal. The classification algorithm was applied
individually to the data grouped in the various subject groups. Both of the early-stage classification events
produced almost similar results, whereas the results obtained from classification over
the averaging factors and the floating factors told a different story than the early-stage classification. The
results show that deep analysis of the data reveals in-depth facts from the input
data. The proposed model can be considered as the effectiv
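The naive Bayes step can be sketched on a toy categorical dataset. The feature values and training rows below are hypothetical, not the Punjab student data:

```python
from collections import Counter, defaultdict
import math

# Illustrative naive Bayes sketch: classify a student as a "slow" or "quick"
# learner from categorical features (here: performance level, subject group).
# Training rows and labels are hypothetical stand-ins for the paper's dataset.

def train(rows, labels):
    label_counts = Counter(labels)
    feat_counts = defaultdict(Counter)  # (feature index, label) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            feat_counts[(i, y)][v] += 1
    return label_counts, feat_counts

def predict(row, label_counts, feat_counts):
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for y, c in label_counts.items():
        lp = math.log(c / total)  # log prior
        for i, v in enumerate(row):
            counts = feat_counts[(i, y)]
            # Laplace smoothing avoids zero probabilities for unseen values
            lp += math.log((counts[v] + 1) / (sum(counts.values()) + 2))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

rows = [("low", "difficult"), ("low", "normal"), ("high", "normal")]
labels = ["slow", "slow", "quick"]
model = train(rows, labels)
print(predict(("low", "difficult"), *model))  # → slow
```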
The International Journal of Database Management Systems is a bi-monthly peer-reviewed journal that publishes articles on database management systems and their applications. The goal is to bring together researchers and practitioners to share new results and establish collaborations. Topics of interest include data integration, privacy, mining, architectures, and more. Authors are invited to submit original papers by the deadline of April 28, 2019.
Semantic Web concepts used in Web 3.0 applications (IRJET Journal)
This document discusses how semantic web concepts can be used in applications for Web 3.0. It provides examples of how semantic web could be applied in areas like medical sciences, search engines, and e-learning. Specifically, it describes how semantic web could help integrate medical data from different sources, enable more accurate medical diagnosis and treatment recommendations based on patient history. It also discusses how a semantic search engine like Swoogle works by tagging web pages with metadata to better understand context and return more relevant search results. Finally, it touches on how a semantic web architecture could enable more sophisticated e-learning systems by linking educational resources.
IRJET - A Workflow Management System for Scalable Data Mining on Clouds (IRJET Journal)
1. The document discusses a workflow management system for scalable data mining on clouds. It proposes using MapReduce and Hadoop frameworks to parallelize k-means clustering of large datasets on cloud infrastructure.
2. The system aims to improve efficiency, security, and transmission speed over existing cloud systems by generating hash codes for files before classification and storage on cloud. It uses deduplication to avoid redundant uploads.
3. The document outlines the system implementation, including user modules for registration, login, profile editing, training-data upload, file upload and download while avoiding redundancy, and changing passwords and logging out. It also discusses testing the system's functionality using unit-testing libraries.
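The hash-before-upload deduplication mentioned in point 2 can be sketched as follows; the in-memory set stands in for the cloud-side hash index:

```python
import hashlib

# Sketch of deduplication by content hashing: compute a hash of each file's
# contents before upload and skip files whose hash is already stored.
# The `stored` set is an in-memory stand-in for the cloud-side hash index.

stored = set()

def upload(data: bytes) -> bool:
    """Return True if the file was uploaded, False if it was a duplicate."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in stored:
        return False        # identical content already on the cloud
    stored.add(digest)
    return True

print(upload(b"report v1"))   # True: new content
print(upload(b"report v1"))   # False: duplicate skipped
```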
1. The document describes a search engine scraper that extracts data from websites, summarizes the extracted information, and converts it into a relevant result for users.
2. The search engine scraper works in three stages: extraction of data from website content, summarization of the extracted data using natural language processing techniques, and conversion of the summarized data into a meaningful format for users.
3. The summarization stage uses natural language toolkit processing libraries to determine sentence similarity, assign weights to sentences, and select sentences with higher ranks to include in the summary.
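The sentence-ranking stage can be sketched with a simple word-frequency scorer. This is a minimal stand-in for the NLTK-based pipeline described above, which would add tokenization, stopword removal, and sentence-similarity measures:

```python
from collections import Counter

# Minimal extractive-summarization sketch: score each sentence by the
# frequency of its words across the whole text, then keep the top sentences.

def summarize(sentences, keep=1):
    freqs = Counter(w for s in sentences for w in s.lower().split())
    scored = sorted(sentences,
                    key=lambda s: sum(freqs[w] for w in s.lower().split()),
                    reverse=True)
    return scored[:keep]

sentences = [
    "Search engines index web pages.",
    "Web pages link to other web pages.",
    "The weather is nice.",
]
print(summarize(sentences))  # → ['Web pages link to other web pages.']
```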
Service Level Comparison for Online Shopping using Data Mining (IIRindia)
The term knowledge discovery in databases (KDD) refers to the analysis step of data mining. The goal of data mining is to extract knowledge and patterns from large data sets, not the data extraction itself. Big-data computing is a critical challenge for the ICT industry. Engineers and researchers are dealing with petabyte data sets in the cloud computing paradigm, so the demand for building a service stack to distribute, manage and process massive data sets has risen drastically. We investigate the problem of a single source node broadcasting a big chunk of data to a set of nodes so as to minimize the maximum completion time. These nodes may be located in the same datacenter or across geo-distributed data centers. The big-data broadcasting problem is modeled as a Lock-Step Broadcast Tree (LSBT) problem. The main idea of LSBT is to define a basic unit of upload bandwidth r: a node with capacity c broadcasts data to a set of ⌊c/r⌋ children at rate r. Note that r is a parameter to be optimized as part of the LSBT problem. The broadcast data are further divided into m chunks, which can then be broadcast down the LSBT in a pipelined manner. In a homogeneous network environment in which each node has the same upload capacity c, the optimal uplink rate r of the LSBT is either c/2 or c/3, whichever gives the smaller maximum completion time. For heterogeneous environments, an O(n log² n) algorithm is presented to select an optimal uplink rate r and to construct an optimal LSBT. The numerical results show better performance, with lower computational complexity and low maximum completion time. The methodology includes building and broadcasting various web applications, followed by the gateway application and batch processing over the TSV data, after which web crawling for resources and the MapReduce process take place; finally, products are picked from recommendations and purchased.
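The homogeneous-network result quoted above can be illustrated by comparing the two candidate rates directly. The completion-time formula here (tree depth plus pipelined chunks) is a simplified assumption, not the paper's exact cost model:

```python
# Sketch of choosing the LSBT uplink rate in a homogeneous network: the
# candidates are r = c/2 and r = c/3, and we keep whichever yields the
# smaller maximum completion time under a simplified pipelined-broadcast
# cost model (tree depth plus chunk pipeline, unit total data size).

def tree_depth(n, fanout):
    """Depth of the shallowest tree covering n nodes with the given fanout."""
    depth, reach = 0, 1
    while reach < n:
        reach *= fanout
        depth += 1
    return depth

def completion_time(n, c, r, m):
    """n nodes, uniform capacity c, uplink rate r, data split into m chunks."""
    fanout = int(c // r)                     # children each node feeds at rate r
    depth = tree_depth(n, fanout)
    return (depth + m - 1) * (1.0 / m) / r   # pipelined delivery of unit data

def best_rate(n, c, m):
    return min((c / 2, c / 3), key=lambda r: completion_time(n, c, r, m))

print(best_rate(n=64, c=2.0, m=8))  # → 1.0, i.e. r = c/2 for this instance
```

A higher rate finishes each chunk faster but allows fewer children per node (a deeper tree), which is exactly the trade-off the c/2-versus-c/3 result resolves.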
Resource Consideration in Internet of Things: A Perspective View (ijtsrd)
Ubiquitous computing and its applications at different levels of abstraction are possible mainly through virtualization. Most of its applications are becoming pervasive with each passing day, with the growing trend of embedding computational and networking capabilities in everyday objects used by the common man. Virtualization provides many opportunities for research in IoT, since most IoT applications are resource-constrained. Therefore, there is a need for an approach that manages the resources of the IoT ecosystem. Virtualization is one such approach that can play an important role in maximizing resource utilization and managing the resources of IoT applications. This paper presents a survey of virtualization and the Internet of Things, and also discusses the role of virtualization in IoT resource management. Rishikesh Sahani | Prof. Avinash Sharma, "Resource Consideration in Internet of Things: A Perspective View", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23694.pdf
Paper URL: https://www.ijtsrd.com/computer-science/world-wide-web/23694/resource-consideration-in-internet-of-things-a-perspective-view/rishikesh-sahani
Review on an automatic extraction of educational digital objects and metadata... (IRJET Journal)
This document reviews systems for automatically extracting educational digital objects (EDOs) and metadata from institutional websites. It discusses existing systems like Agathe, Crossmarc, and CiteSeerX that extract information from documents but not linked web pages. The proposed system would crawl a website, extract and classify EDOs into audio, video and text, and store them in a database along with automatically extracted metadata like title, category, author. This aims to assist repositories in identifying documents that could be uploaded along with their metadata.
Trend-Based Networking Driven by Big Data Telemetry for Sdn and Traditional N... (josephjonse)
This summarizes a research paper on developing an intelligent system that leverages big data telemetry analysis to enable trend-based networking decisions. The system collects data from traditional and SDN networks using SNMP and OpenFlow. Logstash filters the data and sends it to PNDA's Kafka interface. A Jupyter notebook streams the data to OpenTSDB. Benchmarking identifies trends, and the system takes automated action like load balancing across the networks when trends are detected. A GUI provides centralized management. The system demonstrates using big data analytics to monitor networks and make proactive, automated decisions based on observed trends.
TREND-BASED NETWORKING DRIVEN BY BIG DATA TELEMETRY FOR SDN AND TRADITIONAL N... (ijngnjournal)
Organizations face the challenge of accurately analyzing network data and providing automated action based on the observed trend. Trend-based analytics is beneficial for minimizing downtime and improving the performance of network services, but organizations use different network management tools to understand and visualize network traffic, with limited ability to dynamically optimize the network. This research focuses on the development of an intelligent system that leverages big-data telemetry analysis in the Platform for Network Data Analytics (PNDA) to enable comprehensive trend-based networking decisions. The results include a graphical user interface (GUI), delivered via a web application, for effortless management of all subsystems, and the system and application developed in this research demonstrate the potential for a scalable system capable of effectively benchmarking the network to set the expected behavior for comparison and trend analysis. Moreover, this research provides a proof of concept of how trend-analysis results are actioned in both a traditional network and a software-defined network (SDN) to achieve dynamic, automated load balancing.
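The trend-then-act loop described above can be illustrated with a toy detector; the moving-average window, the benchmark threshold, and the rebalance action are all placeholders, not PNDA's actual interfaces.

```python
# Toy trend detector: a moving average of link-utilization samples is
# compared against a benchmarked expected level, and a placeholder
# load-balancing action fires when the trend exceeds it.
from statistics import mean

def detect_trend(samples, window=5, benchmark=0.8):
    """True when the moving average of the last `window` utilization
    samples exceeds the benchmarked expected level."""
    if len(samples) < window:
        return False
    return mean(samples[-window:]) > benchmark

def rebalance_if_needed(samples, benchmark=0.8):
    if detect_trend(samples, benchmark=benchmark):
        return "shift traffic to backup path"   # placeholder action
    return "no action"
```

A real deployment would stream telemetry into the detector and drive SDN controller or routing changes instead of returning a string.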
Web mining involves analyzing textual and link-structure data from the World Wide Web to discover useful information. It deals with petabytes of data generated daily and needs to adapt to evolving usage patterns in real time. Topics related to web mining include web-graph analysis, power laws, structured data extraction, web advertising, user analysis, social networks, and blog analysis. The future will involve very large-scale mining of datasets too big to fit in memory or even on a single disk.
A Generic Model for Student Data Analytic Web Service (SDAWS) (Editor IJCATR)
Any university management system accumulates a cartload of data, and analytics can be applied to it to gather useful information to aid the academic decision-making process. This paper is a novel attempt to demonstrate the significance of a data analytic web service in the education domain, one that can easily be integrated with the University Management System or any other application of the university. Analytics as a web service offers many benefits over traditional analysis methods. The web service can be hosted on a web server and accessed over the internet, or on the private cloud of the campus, and data from various courses in different departments can be uploaded and analyzed easily. In this paper we design a web service framework for use in educational data mining that provides analysis as a service.
Time-Ordered Collaborative Filtering for News Recommendation (IRJET Journal)
This document presents a novel news recommendation system called Time-Ordered Collaborative Filtering for News Recommendation. The system filters and recommends news to users based on their choices and preferences using collaborative filtering and content-based filtering. It extracts news data from popular online newspapers and stores it in a database. The recommendation framework then preprocesses the news data and applies techniques such as sparse-matrix computation to rank news items and recommend the top news to each user based on their interests.
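The recency-weighted ranking idea can be sketched as follows; the decay factor, the overlap-based similarity, and all names are illustrative assumptions rather than the paper's exact algorithm.

```python
# Sketch of time-ordered collaborative filtering: user reads are weighted
# by recency (newer reads count more), and unseen articles are ranked for
# a target user by aggregating recency-weighted votes from users with
# overlapping reading histories.
from collections import defaultdict

def recommend(reads, target, decay=0.9, top_n=3):
    """reads: list of (user, article, age_in_days) tuples."""
    weight = defaultdict(dict)              # user -> {article: recency weight}
    for user, article, age in reads:
        weight[user][article] = max(weight[user].get(article, 0), decay ** age)
    seen = set(weight.get(target, {}))
    scores = defaultdict(float)
    for user, arts in weight.items():
        if user == target:
            continue
        # similarity = recency-weighted overlap with the target's history
        overlap = sum(w * weight[target].get(a, 0) for a, w in arts.items())
        for article, w in arts.items():
            if article not in seen:
                scores[article] += overlap * w
    return [a for a, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:top_n]
```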
This document discusses web mining and its various types. Web mining involves using data mining techniques to discover useful information from web documents and usage patterns. It can involve content mining of text, images, video and audio to extract useful information. It also includes structure mining, which analyzes the hyperlink structure between documents and within documents. Additionally, web usage mining analyzes log files from web servers and applications to discover interesting usage patterns. The document outlines the differences between traditional data mining and web mining. It provides examples of applications of web mining such as information retrieval, network management and e-commerce.
This document proposes a new technique to enhance the learning capabilities and reduce the computation intensity of a competitive learning multi-layered neural network using the K-means clustering algorithm. The proposed model uses a multi-layered network architecture with backpropagation learning to analyze web log data. Data preprocessing steps like cleaning, user identification, and transaction identification are applied to prepare the enterprise proxy log data for analysis. The proposed framework aims to discover useful patterns from web log data through a combination of K-means clustering and a feedforward neural network.
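The cluster-after-preprocessing step described above can be illustrated with a plain K-means over session feature vectors; the features and data here are invented for the example, and the paper's neural-network stage is not reproduced.

```python
# Minimal K-means sketch: each cleaned web-log session is reduced to a
# feature vector (e.g. page views, duration) and grouped into k clusters.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each session to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:                          # recompute center as cluster mean
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters
```

In the paper's pipeline, the resulting cluster assignments would feed the competitive-learning network rather than being the final output.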
Web usage Mining Based on Request Dependency Graph (IRJET Journal)
This document discusses using request dependency graphs (RDGs) to model the dependency relationships between HTTP requests for web usage mining. RDGs can improve data quality and enhance network and web server performance. The authors evaluated their approach using a large real-world web access log and found that RDGs are a useful tool for web usage mining by extracting patterns from user access behaviors and decomposing websites.
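A minimal way to derive such a graph from an access log, assuming each entry records the Referer header (the log format is an assumption, not taken from the paper), is to add an edge from the referring page to the requested URL.

```python
# Sketch of building a request dependency graph (RDG): requests carrying
# a referrer become edges referrer -> url, so embedded objects hang off
# the page that triggered them; referrer-less requests are user-initiated
# roots.
from collections import defaultdict

def build_rdg(log):
    """log: iterable of (referrer, url) pairs; referrer may be None."""
    graph = defaultdict(set)
    roots = set()
    for referrer, url in log:
        if referrer is None:
            roots.add(url)              # user-initiated request
        else:
            graph[referrer].add(url)    # dependent (embedded) request
    return roots, dict(graph)
```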
A HEALTH RESEARCH COLLABORATION CLOUD ARCHITECTURE (ijccsa)
Cloud computing platforms used for research collaborations within public health domains are limited in terms of how service components are organized and provisioned. This challenge is intensified by platform-level challenges of transparency, confidentiality, privacy and trust. Addressing these collaboration issues will necessitate that components be reorganized, and there is a need for secure and efficient approaches to reorganizing the service components, with trust, to support collaboration-related requirements. Through iterative design, a reliable and trust-aware reorganization of cloud components – the Collaboration cloud architecture – is achieved. We utilize SOA, Privacy-By-Design principles and insights from blockchain to enforce trust. We illustrate its potential with a multi-layer security process flow based on server-to-client tokens, Role-Based Access Control, and traditional username-and-password authentication, with assurance of privacy and trust. The architecture shows promise for data governance and the overall management of internal and external data flows.
A detailed survey of page re-ranking: various web features and techniques (ijctet)
This document discusses techniques for page re-ranking on websites based on user behavior analysis. It describes how web usage mining involves analyzing web server logs to extract patterns in user behavior. Common techniques discussed for page re-ranking include Markov models, data mining approaches like clustering and association rule mining, and analyzing linked web page structures. The goal is to better understand user interests and predict future page access to improve information retrieval and optimize website design.
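The Markov-model technique mentioned above can be sketched as a first-order transition table learned from sessions; this is a generic illustration of next-page prediction, not an algorithm from the surveyed paper.

```python
# First-order Markov model over clickstreams: count observed page-to-page
# transitions, then predict the most likely next page for re-ranking.
from collections import Counter, defaultdict

def train(sessions):
    trans = defaultdict(Counter)
    for s in sessions:
        for a, b in zip(s, s[1:]):      # consecutive page pairs
            trans[a][b] += 1
    return trans

def predict_next(trans, page):
    if page not in trans:
        return None                     # page never seen as a source
    return trans[page].most_common(1)[0][0]
```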
The document discusses using Web Oriented Architecture (WOA) principles and technologies to improve transparency, collaboration, and information sharing through publishing and linking government data on the Web. It describes exposing raw data and semantically enriched structured data as public records. Technologies that enable interoperability across disparate data sources for large-scale data federation are also described. Finally, the applicability of the proposed solution architecture to existing frameworks is discussed.
IDENTIFYING IMPORTANT FEATURES OF USERS TO IMPROVE PAGE RANKING ALGORITHMS (IJwest)
The Web is a vast, varied and dynamic environment in which different users publish their documents. Web mining is one of the data mining applications in which web patterns are explored; studies on web mining can be categorized into three classes: usage mining, content mining and structure mining. Today the internet has found increasing significance, and search engines are an important tool for responding to users' interactions. Among the algorithms used to find the pages users desire is the PageRank algorithm, which ranks pages based on users' interests. As the algorithm most widely used by search engines, including Google, it has proved its worth compared to similar algorithms; but considering the growth speed of the Internet and the increasing use of this technology, improving its performance is one of the necessities of web mining. The current study builds on the Ant Colony algorithm and marks the most-visited links by their higher amount of pheromone. Results of the proposed algorithm indicate the high accuracy of this method compared to previous methods. The Ant Colony algorithm, one of the swarm intelligence algorithms inspired by the social behavior of ants, can be effective in modeling the social behavior of web users. In addition, usage mining and structure mining techniques can be used simultaneously to improve page-ranking performance.
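The pheromone idea can be illustrated with a toy re-ranker in which visits deposit pheromone on links and evaporation fades stale popularity; the deposit amount and evaporation rate are assumptions, not values from the paper.

```python
# Pheromone-based link re-ranking in the Ant Colony spirit: each batch of
# user visits deposits pheromone on the links taken, pheromone evaporates
# between batches, and links are ranked by what survives.
def rank_links(visit_rounds, links, evaporation=0.1, deposit=1.0):
    pheromone = {link: 0.0 for link in links}
    for visits in visit_rounds:             # one round = one batch of visits
        for link in pheromone:
            pheromone[link] *= (1 - evaporation)   # evaporation step
        for link in visits:
            pheromone[link] += deposit             # deposit per visit
    return sorted(links, key=lambda l: -pheromone[l])
```

Evaporation is what makes the ranking time-sensitive: a link visited heavily long ago eventually ranks below one visited moderately but recently.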
In this world of information technology, everyone tends to do business electronically. Today a lot of business happens on the World Wide Web (WWW), so it is very important for website owners to provide a better platform to attract more customers to their site. Providing information in a better way is the solution to bring in more customers or users; the customer is the end user, who accesses the information in a way that yields some credit to the website owners. In this paper we define web mining and present a method to utilize web mining in a better way, to know the users' and website's behaviour, which in turn enhances the website's information to attract more users. This paper also presents an overview of the various research done on pattern extraction and web content mining, and how it can act as a catalyst for e-business.
An agent-based collection development system is proposed that uses software agents to perform tasks related to selecting, acquiring, recording, and disseminating information for collection development. The system would use various agents like a Profile Agent to analyze faculty webpages and develop research interest profiles, a Citation Agent to identify publications and topics, a Search Agent to search resources and identify relevant works, and other agents to check the catalog, monitor usage, process interlibrary loans, and handle acquisitions. The agent system could automate many routine collection development tasks currently performed by librarians and help libraries better support faculty research needs.
MULTIFACTOR NAÏVE BAYES CLASSIFICATION FOR THE SLOW LEARNER PREDICTION OVER M... (ijcsa)
High school students must be observed for slow or quick learning abilities in order to provide them with the best education practices, and such analysis can be performed over student performance data. The high school student data was obtained from schools in various regions of Punjab, a pivotal state of India. The complete student data, and the selective data of almost 1300 students obtained from one school in those regions, underwent testing using the model proposed in this paper. The proposed model is based upon the naïve Bayes classification model, classifying the data using multi-factor features obtained from the input dataset. The subjects have been divided into two primary groups, difficult and normal, and the classification algorithm has been applied individually to the data in the various subject groups. Both of the early-stage classification events produced almost similar results, whereas the results obtained from the classification events over the averaging factors and the floating factors told a different story than the early-stage classification. The results show that deep analysis of the data reveals in-depth facts about the input data, and the proposed model can be considered effective.
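A categorical naïve Bayes classifier of the kind the abstract describes can be sketched in a few lines; the features and labels here are invented, and Laplace smoothing is a standard addition rather than necessarily the paper's choice.

```python
# Categorical naive Bayes: count feature-value frequencies per class, then
# classify by the highest log-posterior. Laplace smoothing keeps unseen
# values from zeroing out a class.
from collections import Counter, defaultdict
import math

def train_nb(rows, labels):
    classes = Counter(labels)
    counts = defaultdict(Counter)       # (class, feature index) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(y, i)][v] += 1
    return classes, counts

def classify(model, row):
    classes, counts = model
    total = sum(classes.values())
    def log_post(y):
        lp = math.log(classes[y] / total)           # class prior
        for i, v in enumerate(row):
            c = counts[(y, i)]
            lp += math.log((c[v] + 1) / (sum(c.values()) + len(c) + 1))
        return lp
    return max(classes, key=log_post)
```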
The International Journal of Database Management Systems is a bi-monthly peer-reviewed journal that publishes articles on database management systems and their applications. The goal is to bring together researchers and practitioners to share new results and establish collaborations. Topics of interest include data integration, privacy, mining, architectures, and more. Authors are invited to submit original papers by the deadline of April 28, 2019.
Semantic Web concepts used in Web 3.0 applications (IRJET Journal)
This document discusses how semantic web concepts can be used in applications for Web 3.0. It provides examples of how semantic web could be applied in areas like medical sciences, search engines, and e-learning. Specifically, it describes how semantic web could help integrate medical data from different sources, enable more accurate medical diagnosis and treatment recommendations based on patient history. It also discusses how a semantic search engine like Swoogle works by tagging web pages with metadata to better understand context and return more relevant search results. Finally, it touches on how a semantic web architecture could enable more sophisticated e-learning systems by linking educational resources.
IRJET- A Workflow Management System for Scalable Data Mining on Clouds (IRJET Journal)
1. The document discusses a workflow management system for scalable data mining on clouds. It proposes using MapReduce and Hadoop frameworks to parallelize k-means clustering of large datasets on cloud infrastructure.
2. The system aims to improve efficiency, security, and transmission speed over existing cloud systems by generating hash codes for files before classification and storage on cloud. It uses deduplication to avoid redundant uploads.
3. The document outlines the system implementation, including user modules for registration, login, profile editing, training data upload, and file upload and download while avoiding redundancy, as well as password changes and logout. It also discusses testing the system functionality using unit-testing libraries.
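The hash-before-store deduplication mentioned in point 2 can be sketched as follows; the in-memory digest set stands in for whatever store the cloud side actually keeps.

```python
# Hash-based deduplication: compute a SHA-256 digest per file and skip
# the upload when that digest has already been stored.
import hashlib

class DedupStore:
    def __init__(self):
        self.digests = set()            # digests of already-stored files

    def upload(self, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.digests:
            return "duplicate: skipped"
        self.digests.add(digest)
        return "stored"
```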
1. The document describes a search engine scraper that extracts data from websites, summarizes the extracted information, and converts it into a relevant result for users.
2. The search engine scraper works in three stages: extraction of data from website content, summarization of the extracted data using natural language processing techniques, and conversion of the summarized data into a meaningful format for users.
3. The summarization stage uses natural language toolkit processing libraries to determine sentence similarity, assign weights to sentences, and select sentences with higher ranks to include in the summary.
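The sentence-weighting scheme in stage 3 can be approximated without NLTK using plain word frequencies; this is a simplification of the described pipeline, not its actual code.

```python
# Frequency-based extractive summarization: weight each sentence by the
# average corpus frequency of its words and keep the top-ranked sentences.
import re
from collections import Counter

def summarize(text, n=1):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    def score(sentence):
        toks = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)
    ranked = sorted(sentences, key=score, reverse=True)
    return " ".join(ranked[:n])
```

The real system would add stop-word removal and sentence-similarity measures on top of this frequency baseline.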
Service Level Comparison for Online Shopping using Data Mining (IIRindia)
There is an obvious trend toward a future Internet environment in which billions of sensors are connected in a highly dynamic environment to provide fine-grained information about the physical world. Applications that utilize such information are smart and belong to the Internet of Things (IoT) paradigm. IoT systems need a higher understanding of the situations in which they provide services or functionality, so that they can adapt accordingly. Context information should be collected, modeled, inferred, and distributed to serve IoT applications with the required knowledge. Developing context-aware IoT applications demands a specialized framework that helps apply computations transparently on the cloud and feeds context-aware IoT applications with appropriate context information, rather than letting them access the context sources directly. In this paper, a framework for developing context-aware IoT applications is proposed. The framework is distributed, autonomous, and adaptive to fit the characteristics of IoT applications, and it utilizes agent technology as well as semantic-based features to provide the basic functionality required to develop a context-aware IoT application. A scenario for elder healthcare is explained to evaluate this framework.
Campus realities: forecasting user bandwidth utilization using Monte Carlo si... (IJECEIAES)
This document describes a study that used Monte Carlo simulation to forecast user bandwidth utilization in a campus network. The study collected traffic data from a test campus network setup over 30 days. It analyzed the data to determine bandwidth usage patterns and categorized usage. It then used the data to develop a Monte Carlo simulation model that generated random numbers based on the actual usage data probability distribution. This allowed the model to forecast bandwidth utilization for different days, helping network planners understand demands and design network upgrades accordingly. The model and results help campus networks better plan for high and normal traffic loads to deliver content.
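The forecasting step can be sketched by resampling from the empirical distribution of observed daily usage; the figures below are invented for illustration and the real model categorized usage before sampling.

```python
# Monte Carlo bandwidth forecast: draw future days at random from the
# empirical distribution of the 30-day measurements, then summarize the
# simulated demand for capacity planning.
import random

def forecast(daily_usage_gb, days=30, seed=42):
    rng = random.Random(seed)
    simulated = [rng.choice(daily_usage_gb) for _ in range(days)]
    return {
        "mean": sum(simulated) / days,  # expected daily demand
        "peak": max(simulated),         # worst simulated day
    }
```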
IRJET- Secure Re-Encrypted PHR Shared to Users Efficiently in Cloud Computing (IRJET Journal)
This document proposes a Securely Re-Encrypted PHR Shared to Users Efficiently in Cloud Computing (SeSPHR) system. The SeSPHR system aims to securely store and share patients' Personal Health Records (PHRs) with authorized entities in the cloud while preserving privacy. It encrypts PHRs stored on untrusted cloud servers and only allows verified users access, using re-encryption keys from a semi-trusted proxy server. The system enforces patient-centric access management of PHR components based on access levels and supports dynamic addition and removal of authorized users. The operation of SeSPHR was analyzed and verified using High-Level Petri Nets, SMT-Lib and the Z3 solver, together with a performance analysis.
Web Services Based Information Retrieval Agent System for Cloud Computing (Editor IJCATR)
This document proposes a framework for an information retrieval agent system using web services in a cloud computing environment. The system would allow users to search for medical information from a private medical cloud. Hospitals and clinics in the cloud provide web services with information about specialists and doctors. The proposed multi-agent system would let patients search for information by day, time, doctor name, clinic, or disease. It would include interface, information, and reasoning agents. The interface agent would receive queries, format them, pass them to the information agent, and return results. The information agent would search cloud databases and retrieve results. The reasoning agent would analyze results and return the most relevant to the interface agent.
Analysing Transportation Data with Open Source Big Data Analytic Tools (ijeei-iaes)
This document discusses analyzing transportation data using open source big data analytic tools. It provides an overview of H2O and SparkR, two popular tools. It then demonstrates applying these tools to a transportation dataset, using a generalized linear model. Specifically, it shows importing and splitting the data, building a GLM model with H2O and SparkR, making predictions on test data, and comparing predicted versus actual values. The document provides examples of the coding and outputs at each step of the analysis process.
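The same split/fit/predict/compare workflow can be mimicked in miniature with an ordinary least-squares fit on a single feature; this stands in for the GLM (and for H2O/SparkR) purely for illustration, with invented trip data.

```python
# Miniature version of the described workflow: split the data, fit a
# one-feature least-squares model on the training part, and predict on
# the held-out part for comparison with actual values.
def fit_ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx       # (slope, intercept)

def predict(model, xs):
    slope, intercept = model
    return [slope * x + intercept for x in xs]

# illustrative transportation-style data: trip distance vs. trip minutes
distance = [1, 2, 3, 4, 5, 6, 7, 8]
minutes = [12, 19, 33, 41, 52, 58, 71, 79]
model = fit_ols(distance[:6], minutes[:6])   # train on the first 6 rows
preds = predict(model, distance[6:])         # predict the held-out rows
```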
Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ... (Editor IJAIEM)
Dr. G. Anandharaj¹, Dr. P. Srimanchari²
¹Associate Professor and Head, Department of Computer Science, Adhiparasakthi College of Arts and Science (Autonomous), Kalavai, Vellore (Dt) - 632506
²Assistant Professor and Head, Department of Computer Applications, Erode Arts and Science College (Autonomous), Erode (Dt) - 638001
ABSTRACT
With the unpredictable increase in mobile apps, more and more threats are migrating from the traditional PC client to the mobile device. Compared with the traditional Windows-Intel alliance on the PC, the Android alliance dominates the Mobile Internet, and apps have replaced PC client software as the foremost target of malicious use. In this paper, to improve the security status of recent mobile apps, we propose a methodology to evaluate mobile apps based on a cloud computing platform and data mining. Compared with traditional methods, such as permission-pattern-based methods, it combines dynamic and static analysis to comprehensively evaluate an Android application. The Internet of Things (IoT) denotes a worldwide network of interconnected, uniquely addressable items communicating via standard protocols. Accordingly, to prepare us for the forthcoming invasion of things, a tool called data fusion can be used to manipulate and manage such data in order to improve processing efficiency and provide advanced intelligence. In this paper, we propose an efficient multidimensional fusion algorithm for IoT data based on partitioning. Attribute reduction and rule extraction methods are then used to obtain the synthesis results, and by proving a few theorems and by simulation, the correctness and effectiveness of the algorithm are illustrated. This paper also introduces and investigates large iterative multitier ensemble (LIME) classifiers specifically tailored for big data. These classifiers are very hefty but quite easy to generate and use; they can be so large that it makes sense to use them only for big data. Our experiments compare LIME classifiers with various base classifiers and standard ensemble meta-classifiers. The results obtained demonstrate that LIME classifiers can significantly increase the accuracy of classification, outperforming both the base classifiers and the standard ensemble meta-classifiers.
Keywords: LIME classifiers, ensemble meta-classifiers, Internet of Things, big data
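The majority-vote principle underlying ensemble classifiers such as LIME can be made concrete with a minimal sketch. The three base classifiers and the decision boundary below are hypothetical, and a real LIME classifier stacks several tiers of ensemble meta-classifiers rather than the single voting tier shown.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-classifier predictions for one sample by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(base_classifiers, sample):
    """One voting tier: each base classifier labels the sample, the tier votes."""
    return majority_vote([clf(sample) for clf in base_classifiers])

# Three weak (hypothetical) base classifiers that disagree near the boundary.
clf_a = lambda x: 1 if x > 3 else 0
clf_b = lambda x: 1 if x > 5 else 0
clf_c = lambda x: 1 if x > 4 else 0   # the true boundary is assumed to be 4

samples = [2, 4.5, 6]
preds = [ensemble_predict([clf_a, clf_b, clf_c], x) for x in samples]
print(preds)  # → [0, 1, 1]: the vote follows the median boundary
```

The vote corrects clf_b's error at 4.5, illustrating why an ensemble can beat its individual members.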
Grid computing, or network computing, was developed to make computing power available in the same way electric power is available from the grid: one simply plugs in, and whoever needs power may use it. In grid computing, if a system needs more power than it has available, it can share the computation with other machines connected to the grid. In this way we can use the power of a supercomputer without a huge cost, and CPU cycles that were previously wasted can be utilized. To perform grid computation on computers joined through the Internet, software that supports grid computation must be installed on each computer inside the virtual organization (VO). This software handles information queries, storage management, processing scheduling, authentication, and data encryption to ensure information security.
An Extensible Web Mining Framework for Real KnowledgeIJEACS
With the emergence of Web 2.0 applications that bestow rich user experience and convenience without time and geographical restrictions, web usage logs became a goldmine to researchers across the globe. User behavior analysis in different domains based on web logs helps enterprises with strategic decision making. Business growth depends on customer-centric approaches that require knowledge of customer behavior to succeed; the rationale is that customers have alternatives and competition is intense. Therefore the business community needs business intelligence to make expert decisions, besides focusing on customer relationship management. Many researchers have contributed towards this end. However, there is still a need for a comprehensive framework that caters to the needs of businesses by ascertaining the real needs of web users. This paper presents a framework named the eXtensible Web Usage Mining Framework (XWUMF) for discovering actionable knowledge from web log data. The framework employs a hybrid approach that exploits fuzzy clustering methods and methods for user behavior analysis. Moreover, the framework is extensible, as it can accommodate new algorithms for fuzzy clustering and user behavior analysis. We propose an algorithm known as Sequential Web Usage Miner (SWUM) for efficient mining of web usage patterns from different data sets. We built a prototype application to validate our framework. Our empirical results reveal that the framework helps in discovering actionable knowledge.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Amazon products reviews classification based on machine learning, deep learni...TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave reviews about the product. These reviews are very helpful for store owners and product manufacturers for improving their work processes as well as product quality. An automated system is proposed in this work that operates on two datasets, D1 and D2, obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF) and bag of words (BoW), and global vectors (GloVe) and Word2vec, respectively. Four machine learning (ML) models, support vector machines (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB), two deep learning (DL) models, convolutional neural network (CNN) and long short-term memory (LSTM), and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML and DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1, with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% on word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
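The TF-IDF features mentioned in the abstract can be sketched in a few lines of pure Python. The toy reviews and tokenization below are invented for illustration; a real pipeline would use a library implementation on the Amazon datasets D1 and D2.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weights for a tokenized corpus (one dict per document)."""
    n = len(docs)
    # document frequency: in how many documents does each term occur?
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency normalized by document length, scaled by idf
        vectors.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return vectors

# Toy review corpus (hypothetical): rare class-specific words get high weight.
reviews = [
    "great product great quality".split(),
    "bad product broke fast".split(),
    "great quality fast delivery".split(),
]
vecs = tfidf_vectors(reviews)
# "broke" occurs in 1 of 3 docs, "product" in 2 of 3, so "broke" weighs more.
print(vecs[1]["broke"] > vecs[1]["product"])  # → True
```

Classifiers such as SVM or MNB are then trained on these weighted vectors rather than raw counts.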
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna operating at 3.6 GHz was designed and its performance evaluated. Rogers RT/Duroid 5880 has been used as the substrate material, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm; it serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the recommended antenna design. The goal of this study was to obtain a wider transmission bandwidth, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the presented antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Besides, the simulation revealed that the side-lobe level was much better than those of earlier works, at -28.8 dB. Thus, it makes a solid contender for wireless technology and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
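The enhanced PI idea can be sketched as a discrete control loop. The paper's linearized TCP/AQM model and the snake-optimized gains are not reproduced in the abstract, so the sketch below regulates a generic first-order queue model; the nonlinear gain form, the plant, and all constants (kp, ki, alpha, the 0.2 drain rate) are hypothetical illustrations.

```python
import math

def enhanced_pi_step(error, integral, kp, ki, alpha, dt):
    """One step of an enhanced PI law: the classic PI terms are scaled by a
    nonlinear gain that grows with the error magnitude (an assumed form; the
    paper's exact nonlinear gain and snake-tuned values are not given here)."""
    nonlinear_gain = 1.0 + math.tanh(alpha * abs(error))
    integral += error * dt
    u = nonlinear_gain * (kp * error + ki * integral)
    return u, integral

# Toy first-order queue model: q' = u - drain; regulate q to a 50-packet target.
q, target, integral = 0.0, 50.0, 0.0
kp, ki, alpha, dt = 0.8, 0.4, 0.1, 0.1
for _ in range(500):
    u, integral = enhanced_pi_step(target - q, integral, kp, ki, alpha, dt)
    q += (u - 0.2 * q) * dt   # queue dynamics with a constant drain rate

print(round(q, 1))  # settles near the 50-packet setpoint
```

The integral term removes the steady-state offset, so the queue converges to the target regardless of the drain rate; the nonlinear gain merely speeds up the transient for large errors.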
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure these networks suffers from weak membership authentication. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public keys of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identities of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated against the existing identity-based cryptosystem (EIBC). The results show that the MIBC recorded an IDR of 99.3% against 94.3% for the EIBC for 140 unregistered vehicles attempting to intrude on the network. The MIBC also shows a lower CT of 1.17 ms against 1.70 ms for the EIBC. The MIBC can thus be used to improve the security of VANETs.
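The Lagrange-interpolation ingredient of the MIBC can be illustrated with a Shamir-style sketch over a prime field: a group secret is recoverable only by members holding valid shares, which is the essence of membership authentication. The polynomial, the prime 257, and the share points below are hypothetical; the actual scheme operates on a non-singular elliptic curve.

```python
def reconstruct_secret(shares, p):
    """Lagrange interpolation at x=0 over GF(p): recover f(0) from (x, f(x)) points."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % p                 # numerator of the basis poly at 0
                den = (den * (xi - xj)) % p           # its denominator
        secret = (secret + yi * num * pow(den, -1, p)) % p  # modular inverse (Py 3.8+)
    return secret

# Hypothetical group key f(x) = 123 + 45x + 67x^2 mod 257, shared among members.
p = 257
f = lambda x: (123 + 45 * x + 67 * x * x) % p
shares = [(x, f(x)) for x in (1, 2, 3)]   # three valid member shares
print(reconstruct_secret(shares, p))  # → 123
```

With fewer shares than the polynomial degree plus one, every candidate secret remains equally likely, which is what keeps non-members out.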
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) toward IB. Prior research has emphasized that the most essential component in explaining IB adoption behavior is the behavioral intention to use IB. TAM is helpful for figuring out how the elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that can justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to gather sufficient theoretical support from the existing literature for the essential elements and their relationships, in order to unearth new insights about the factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
Smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure concerned with controlling the smart antenna pattern to meet specified requirements, such as steering the beam toward the desired signal and placing deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has some drawbacks, such as slow convergence speed and high steady-state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust the step size. Computer simulation outcomes illustrate that the given model has a fast convergence rate as well as a low steady-state mean square error.
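The C-LMS update that FC-LMS builds on is compact enough to sketch. Below, a fixed step size mu identifies a hypothetical 3-tap channel from input/output data; the fuzzy-controlled variant would adapt mu at each step, which is not shown.

```python
import random

def lms_identify(x, d, taps=3, mu=0.05):
    """Classic LMS: adapt FIR weights w so that w·x[n] tracks the desired d[n].
    Update rule: w <- w + mu * e[n] * x[n] (stochastic descent on squared error)."""
    w = [0.0] * taps
    errors = []
    for n in range(taps, len(x)):
        window = x[n - taps:n][::-1]                     # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))    # filter output
        e = d[n] - y                                     # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        errors.append(e * e)
    return w, errors

# Identify a known (hypothetical) 3-tap channel h from white-noise input.
random.seed(0)
h = [0.5, -0.3, 0.2]
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [0.0] * 3 + [sum(h[k] * x[n - 1 - k] for k in range(3)) for n in range(3, 2000)]
w, errors = lms_identify(x, d)
print([round(wi, 2) for wi in w])  # converges toward h = [0.5, -0.3, 0.2]
```

The squared-error sequence decays toward zero, and the residual "fluctuation error" mentioned in the abstract scales with mu, which is exactly what the fuzzy step-size control targets.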
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low-power, long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos. The data collected at each sensor node is transmitted to the gateway via LoRa wireless transmission, where it is collected, processed, and uploaded to a cloud database. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system sounds a siren and sends a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameters based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to the big data issue, as many Internet of Things (IoT) devices are on the network. Based on previous research, most of the frequency spectrum is allocated, yet some of it goes unused; these unused portions are called spectrum holes. Energy detection is one of the most frequently used spectrum sensing methods, since it is easy to apply and does not require any prior knowledge of the licensed users' signals. However, this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83%, higher than the energy detection performance. This result indicates that wavelet-based sensing has higher precision in detection and that interference towards the primary user can be decreased.
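The energy detection baseline that wavelet sensing is compared against can be sketched in a few lines: average the squared samples over a sensing window and compare with a threshold. The signals, threshold, and SNR below are invented for illustration.

```python
import math, random

def energy_detect(samples, threshold):
    """Energy detection: the average energy of a sensing window is compared
    with a threshold to decide whether the channel is occupied."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

random.seed(1)
# Empty channel: unit-variance noise only (average energy ≈ 1).
noise = [random.gauss(0, 1) for _ in range(1000)]
# Occupied channel: a primary-user tone buried in the same noise (hypothetical SNR).
signal = [2.0 * math.sin(0.1 * n) + random.gauss(0, 1) for n in range(1000)]

threshold = 1.5   # chosen above the unit noise power
print(energy_detect(noise, threshold), energy_detect(signal, threshold))  # → False True
```

The weakness the abstract notes is visible here: shrink the tone amplitude toward the noise floor and the two energies become indistinguishable, which is exactly the low-SNR failure that motivates wavelet-based sensing.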
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server by flooding it with internet traffic. As a result, the server cannot be accessed by legitimate users. This attack causes enormous losses for a company because it can reduce user trust and damage the company’s reputation, losing customers due to downtime. One of the services at the application layer that users can access is the web-based lightweight directory access protocol (LDAP) service, which provides safe and easy access to directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer, to get fast and accurate results when dealing with imbalanced data. Based on the results obtained, DDoS attack detection using a deep learning approach on imbalanced data performs better for binary classes when implemented with the synthetic minority oversampling technique (SMOTE). On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in the multiclass case when implemented with the adaptive synthetic (ADASYN) method.
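The SMOTE idea used for the binary case can be sketched without any library: synthetic minority samples are placed on line segments between a minority point and one of its nearest minority neighbours. The toy 2-D "attack"/"benign" points below are invented; a real pipeline would use an implementation such as imbalanced-learn on the CICDDoS 2019 features.

```python
import random

def smote(minority, n_new, k=2):
    """Minimal SMOTE sketch: synthesize points by interpolating between a
    minority sample and one of its k nearest minority neighbours."""
    synthetic = []
    for _ in range(n_new):
        a = random.choice(minority)
        # k nearest neighbours of a (excluding a itself) by squared distance
        neighbours = sorted((p for p in minority if p is not a),
                            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))[:k]
        b = random.choice(neighbours)
        t = random.random()
        # new point lies on the segment between a and b
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

random.seed(42)
attacks = [(0.9, 0.8), (1.0, 0.9), (0.8, 1.0)]   # minority (attack) class
benign = [(0.1, 0.2)] * 97                        # majority class
new_attacks = smote(attacks, 97 - len(attacks))
print(len(attacks) + len(new_attacks), len(benign))  # → 97 97
```

ADASYN differs mainly in where it places the new points: it generates more of them near minority samples that are hardest to learn, instead of uniformly.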
The appearance of uncertainties and disturbances often affects the characteristics of both linear and nonlinear systems. Moreover, the stabilization process may deteriorate, incurring a catastrophic effect on system performance. As such, this manuscript addresses the concept of the matching condition for systems suffering from mismatched uncertainties and exogenous disturbances. The perturbation of the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and stated in Proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty from the system with mismatched uncertainty. Lastly, two-dimensional numerical examples are provided to exercise the proposed proposition. The outcome shows that the matching condition has the ability to transform the system into a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the Doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both bit error rate (BER) and throughput performance. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results show a massive improvement in performance over the conventional system and significant gains with massive MIMO, including the best throughput and BER. Compared to conventional systems, the improved system has a throughput around 22% higher and the best performance in terms of BER, with around 25% lower error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, the aim is to obtain, from a single antenna, two different asymmetric radiation patterns at two different frequencies: that of an antenna shaped like the cross-section of a parabolic reflector (a fan-blade type antenna) and that of an antenna with a cosecant-squared radiation characteristic. For this purpose, a fan-blade type antenna is designed first, and then the reflective surface of this antenna is completed to the shape of the reflective surface of the cosecant-squared antenna using a frequency selective surface designed to provide the characteristics suitable for the purpose. The designed frequency selective surface provides transmission as near perfect as possible at the 4 GHz operating frequency, while acting as a band-stop filter for electromagnetic waves at the 5 GHz operating frequency, where it becomes a reflective surface. Thanks to this frequency selective surface used as a reflective surface in the antenna, a fan-blade type radiation characteristic is obtained at the 4 GHz operating frequency, while a cosecant-squared radiation characteristic is obtained at 5 GHz.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber-based optical sensor for iron detection is demonstrated in this paper. The sensor head consists of an unclad optical fiber with an unclad length of 1 cm and a straight structure. The results obtained show a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity achieved are 0.0328/ppm and 0.9824, respectively, at a wavelength of 690 nm. At the same wavelength, other performance parameters are also studied: the resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm, respectively. This iron sensor is advantageous in that it does not require any reagent for detection, making it simpler and more cost-effective in the implementation of iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses progressive learning for the structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can preserve robust accuracy while updating with new data and new-class data in an online training situation. The accuracy robustness arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) within PSTOS-ELM. This method is suitable for applications with streaming data that often encounter new-class data. Our experiments compare the accuracy and accuracy robustness of PSTOS-ELM during data updates with those of the batch extreme learning machine (ELM) and the structural tolerance online sequential extreme learning machine (STOS-ELM), both of which must retrain on the data when new-class data arrives. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also being able to incorporate new-class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
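The frequency-domain feature extraction described above can be sketched with a naive DFT: the power in the alpha band (associated with relaxation) is compared against the beta band (attention). The synthetic 10 Hz signal and 128 Hz sampling rate below are assumptions for illustration; a real system would use an FFT on Mind Wave Mobile 2 samples.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of a signal in [f_lo, f_hi] Hz via a naive O(N^2) DFT,
    which is fine for short EEG windows (a real system would use an FFT)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n ** 2
    return power

# One second of a hypothetical 10 Hz alpha rhythm sampled at 128 Hz.
fs, n = 128, 128
eeg = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]
alpha = band_power(eeg, fs, 8, 13)    # alpha band (relaxation)
beta = band_power(eeg, fs, 14, 30)    # beta band (attention)
print(alpha > 10 * beta)  # → True
```

Band-power ratios like this, computed per window and per gesture, are the kind of features a classifier network can map to wheelchair commands.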
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
Level set models are frequently employed for image segmentation. They offer the best solution to overcome the main limitations of deformable parametric models. However, the challenge in applying those models to medical images still lies in removing blur at image edges, which directly affects the edge indicator function, prevents images from being segmented adaptively, and causes a wrong analysis of pathologies, preventing a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving a system of two two-dimensional partial differential equations (PDEs). The first PDE performs restoration using Euler’s equation, similar to anisotropic smoothing based on a regularized Perona-Malik filter that eliminates noise while preserving edge information in accordance with the contours detected by the second equation, which segments the image based on the solutions of the first. This approach allows developing a new algorithm that overcomes the drawbacks of the studied model. The proposed method gives clear segments that can be applied in any application. Experiments on many medical images, in particular blurry images with high information loss, demonstrate that the developed approach produces superior segmentation results in quantity and quality compared to models presented in previous works.
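The regularized Perona-Malik smoothing used in the restoration PDE can be sketched in one dimension: diffusion is modulated by a conductance that shrinks at strong gradients, so noise is removed while edges survive. The conductance form, kappa, time step, and toy signal below are illustrative choices, not the paper's exact discretization.

```python
import math

def perona_malik_1d(u, iterations=50, kappa=0.5, dt=0.2):
    """1-D Perona-Malik diffusion: smooth where gradients are small, preserve
    edges where gradients are large, via conductance g = exp(-(|grad u|/kappa)^2)."""
    u = list(u)
    for _ in range(iterations):
        new = u[:]
        for i in range(1, len(u) - 1):
            d_right = u[i + 1] - u[i]
            d_left = u[i - 1] - u[i]
            g_r = math.exp(-(d_right / kappa) ** 2)   # small at strong edges
            g_l = math.exp(-(d_left / kappa) ** 2)
            new[i] = u[i] + dt * (g_r * d_right + g_l * d_left)
        u = new                                        # explicit time step
    return u

# Noisy step edge: the small wiggles are diffused away, the big jump is kept.
signal = [0.0, 0.1, -0.1, 0.05, 0.0, 1.0, 1.1, 0.9, 1.05, 1.0]
smoothed = perona_malik_1d(signal)
print([round(v, 2) for v in smoothed])
```

Across the 1.0 jump the conductance is roughly exp(-4), so almost no diffusion crosses the edge, while within each flat region the conductance stays near 1 and the noise is flattened.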
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Introduction: e-waste definition; sources of e-waste; hazardous substances in e-waste; effects of e-waste on environment and human health; need for e-waste management; e-waste handling rules; waste minimization techniques for managing e-waste; recycling of e-waste; disposal and treatment methods of e-waste; mechanism of extraction of precious metals from leaching solution; global scenario of e-waste; e-waste in India; case studies.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient Silk Road trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on that power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and traditional and non-traditional security are explored and explained. Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, the study examines China's role in Central Asia. It adheres to the empirical epistemological method and takes care to remain objective, analyzing primary and secondary research documents critically to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. According to this study, China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, a success that may be attributed to the effective utilisation of key tools such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Use PyCharm for remote debugging of WSL on a Windows machine shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
International Conference on NLP, Artificial Intelligence, Machine Learning an...
Designing and configuring context-aware semantic web applications
TELKOMNIKA Telecommunication, Computing, Electronics and Control
Vol. 18, No. 5, October 2020, pp. 2549~2559
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v18i5.15277
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
Designing and configuring context-aware semantic
web applications
Haider Hadi Abbas 1, Suha Sahib Oleiwi 2, Haider Rasheed Abdulshaheed 3
1 Computer Technology Engineering Department, Al-Mansour University College, Iraq
2 Department of Computers, Ministry of Higher Education, Iraq
3 Baghdad College of Economic Sciences University, Iraq
Article history: Received Jan 11, 2020; Revised Mar 15, 2020; Accepted Apr 14, 2020

ABSTRACT
Context-aware services are attracting worldwide attention as the use of web
services grows rapidly. We designed an architecture of a context-aware
semantic web which provides on-demand flexibility and scalability in
extracting and mining research papers from well-known digital libraries,
i.e. ACM, IEEE and SpringerLink. This paper proposes a context-aware
services system which supports automatic discovery and integration of
context based on Semantic Web services. This work has been done using
the Python programming language with a dedicated library for semantic
web analysis named “Cubic-Web”, which can run on any defined dataset;
in our case we have used a dataset for extracting and studying several
publications to measure the impact of the context-aware semantic web
application on the results. We have found the average recall and average
accuracy for all the context-aware research journals in our work.
Moreover, as this study is limited to journal documents, future studies can
examine different types of publications using this approach. An efficient
system has been designed that considers the parameters of research-article
metadata to find papers from the web using semantic web technology:
parameters like year of publication, type of publication, number of
contributors, and the evaluation and analysis methods used in the
publication. All this data has been extracted using the designed
context-aware semantic web technology.
Keywords:
Context-aware
Research publications
Searching
Semantic web services
This is an open access article under the CC BY-SA license.
Corresponding Author:
Haider Rasheed Abdulshaheed,
Baghdad College of Economic Sciences University,
Baghdad, Iraq.
Email: haider252004@yahoo.com
1. INTRODUCTION
Context information is information about the objects available to a user and the user's location,
activities, and status [1]. Context-aware systems collect such context information from various sensors and
devices, recognize the situation surrounding the user with a particular algorithm, and determine the necessary
useful information to provide to users [2]. Through the collection and exchange of context information,
a context-aware service can provide appropriate services to the user based on analysis and reasoning with
specific algorithms [3]. Having emerged in the mid-1990s era of ubiquitous computing, context-aware services
are recognized as a leading service, but there were virtually no actual commercial services or killer apps.
Then, after the advent of the iPhone, the development of smartphones grew exponentially, and numerous
location-based services emerged. People are now becoming accustomed to using their location information and
status information in location-based services. However, many location-based services are duplicated, and
context information is not specifically being used except for location information. While a context-aware
service model encompassing location-based services is required, the existing location-based services and
context-aware services have been developed independently in each domain based on individual models.
Consequently, context information has not been efficiently utilized in context-aware services. Therefore, there
is a need for the sharing and exchange of context information to enhance efficiency. Moreover,
higher-level information can be produced by combining context information.
In other words, an integrated framework is required that provides context-aware services across multiple
domains and services by combining information obtained from a user as well as the information surrounding
the user. For example, while existing health-related context-aware services were limited to providing only
fragmentary information, the framework proposed in this paper is expected to provide
comprehensive health-related services by combining a user's biometric information and a variety of information
around the user, such as pollution or dust. Through more comprehensive and accurate health information, it is
possible to provide diagnosis and accident prevention to the elderly and patients, and to recommend emergency
facilities and amenities suitable for their situation.
In order to develop such a system, as a first step, context-aware services and context information
should be separated. After that, the representation of context information and services should be
standardized. The Semantic Web and Semantic Web services support such standardization. Also,
Semantic Web services support automatic service discovery and integration, which enable
the automatic integration of context. The current context-aware services are mostly restricted to
a specific range of services because the range of information is limited to the information gathered
by the service itself. However, our structure combines context information from a variety of
sources and enables comprehensive context-aware services. Additionally, the ultimate objective is to support
the automatic discovery and integration of context information. For the exchange of context information with
a manual interpretation of content about context information, syntactic interoperability based on the Web
services framework is sufficient. However, semantic interoperability based on Semantic Web technologies is
essential for the automatic processing of context information. Therefore, we apply Semantic Web services,
which integrate the Semantic Web and Web services, to context-aware services that require
the automatic exchange of context information. The analysis has been carried out on actual research
publication journals including the following digital libraries:
− SpringerLink
− ACM portal digital library
− ScienceDirect
− IEEE xplore digital library
In the development of this framework, the core is the system that supports context collection from
a variety of research journals and the integration of the research journals for the generation of high-level
context information, as shown by the major research journals in Figure 1.
Figure 1. False-positive distribution of major research journals; note that none of
the libraries offered searching through information using a semantic web methodology [3]
Moreover, by separating context-aware applications and context information, a context-aware application
need not depend on specific context providers, because the selection and integration of context from
a variety of context providers would be possible. In this study, we applied ontologies and
Semantic Web services to a context-aware framework in order to enable the automatic discovery and
integration of context information and services according to the changing conditions of the user, as context
information may change rapidly.
Problem statement
The problem comprises constructing an efficient semantic web-based system that extracts
context-aware research papers from well-known journals along with their categories. We propose
a context-aware service structure that gives progressively pervasive and general services
based on shared openness and context-information exchange within the system, beyond the limits of
independent services in a particular field. An effective system has been designed considering the following
parameters of research-article metadata to find papers from the web using semantic web technology.
− The year and type of publication.
− The different parameters of published papers analyzed with the semantic web.
− How the published paper gathered information.
− Whether statistical analysis was used or not, determined using the semantic web.
− The research evaluation approach.
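The metadata parameters listed above can be modeled as a simple record type with a filtering helper. The following is a minimal Python sketch; the class and field names are illustrative, not taken from the paper's implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PaperMetadata:
    """Illustrative record for the research-article metadata parameters."""
    title: str
    year: int
    publication_type: str                    # e.g. "journal", "conference", "survey"
    contributors: List[str] = field(default_factory=list)
    evaluation_method: Optional[str] = None  # e.g. "case study", "field experiment"
    analysis_method: Optional[str] = None    # e.g. "content analysis"

def filter_papers(papers, year=None, publication_type=None):
    """Select papers matching the given metadata parameters (None = any)."""
    result = []
    for p in papers:
        if year is not None and p.year != year:
            continue
        if publication_type is not None and p.publication_type != publication_type:
            continue
        result.append(p)
    return result
```

A query such as `filter_papers(papers, year=2015, publication_type="journal")` then mirrors the parameter-based paper search the system performs.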
Contributions
The main contributions of this work involve improving on the results of previous work in topic modeling
and query expansion for publications in three context-aware research journals, i.e. ACM, IEEE and
SpringerLink, in the open-source dataset. In addition, the research also involves applying the work to
published-document parameters, since only a few works have tackled this topic so far.
The main contributions of this dissertation are to:
− Demonstrate that classifying the different publications with meaningful descriptive information using
context-aware semantic indexing can improve the results of existing methods applied to publication
documents in different journals.
− Test if a context-aware topic model can describe the semantics of a published publication without explicit
semantics.
− Test if using the Python framework “Cubic-Web” gives higher accuracy than Latent Semantic Indexing for
publication documents, which has already been shown.
− Test if the dataset of publications in different journals has higher average recall than
average precision using the “Cubic-Web” framework for the semantic web, based on the ontology decisions
with both internal and external factors.
− Combine the internal ontology drivers and external ontology drivers and apply them to different extracted
publication documents using the “Cubic-Web” framework.
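Average recall and average precision, as compared in the contributions above, can be computed per journal and then averaged. A minimal sketch, assuming each journal contributes one (retrieved, relevant) pair of document-ID sets; this is standard information-retrieval bookkeeping, not the paper's exact evaluation code:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query over collections of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def averages(results):
    """Average precision and recall over (retrieved, relevant) pairs, one per journal."""
    pairs = [precision_recall(r, g) for r, g in results]
    n = len(pairs)
    avg_p = sum(p for p, _ in pairs) / n
    avg_r = sum(r for _, r in pairs) / n
    return avg_p, avg_r
```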
2. BACKGROUND
2.1. Context information and context-aware services
Context is defined as "the interrelated conditions in which something exists or occurs" in
Merriam-Webster's Collegiate Dictionary [3]. The meaning of context differs somewhat between
researchers. Context is defined as the user's location, environment, identity and time [4], or the user's
emotional state, focus of attention, location and orientation, date and time, and the objects and people in
the user's environment [5]. Additionally, it is defined as the elements of the user's environment which
the computer knows about [6]. In our research, context means the information which is given in understanding
the user's conditions or situation. Ubiquitous information means all information surrounding the conditions of
the user in a wide sense, but context means only the information required by the user from the ubiquitous
information. In other words, context-aware systems identify and select the necessary information from the large
amount of ubiquitous information around the user in order to provide context-aware services [7].
Context-aware computing is defined as software that adapts to the place of use, the people and
objects around, and can accommodate these changes of objects over time [8]. After this definition, there were
various definitions of context-aware computing, but each has an emphasis on specific aspects. For the
implementation of context-aware computing, it is desirable that services react specifically to their
environmental characteristics, for example current location, and adjust their behavior according to the
changing conditions [9]. As an effort to classify context information, there was a study listing the
surrounding conditions of the user as context information, for example the identity of the user, time, season,
and temperature [10].
The following studies gave properties for context information or classified context information
according to the characteristics of the sensors that collect the context. In recent years, ontology has been
used to define context information in many studies, as shown in Figure 2. However, the existing definitions of
context information have many problems for interoperation between context-aware applications, while they are
appropriate for the development of independent context-aware applications. For example, context information
used in one application might not be able to be used in other applications because the context information is
dependent on a specific application.
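Application-independent context can be represented as subject-predicate-object triples, in the spirit of the ontology-based models discussed below; any application can then query the same store instead of holding its own private format. The sketch uses plain Python rather than a real RDF library, and all identifiers are illustrative:

```python
class ContextStore:
    """Tiny triple store: context as (subject, predicate, object) facts."""
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = ContextStore()
store.add("user:alice", "hasLocation", "room:office")
store.add("user:alice", "hasActivity", "reading")
```

Because the facts are application-neutral, a second application can reuse `store.query("user:alice")` without knowing how the first one produced the context.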
Figure 2. The classical search models based on which the user extracts
the results by query search and query refinement from a search engine [11]
2.2. Ontology-based context-aware framework
Ontology-based context-aware models were developed because they support easy sharing and mutual
use of context in the first stage of collecting context information [12]. GAIA [13], CoBrA (Context Broker
Architecture) [14] and SOCAM (Service-Oriented Context-Aware Middleware) [15] are examples of
the ontology-based context-aware architectures. All three frameworks have the merit of information sharing,
reusability, reasoning, and interoperability by using ontology.
Firstly, GAIA (Context Infrastructure) [16] is a context-aware service structure that allows a provider
to obtain various context information and reason over it. The Context Provider in the structure of GAIA
collects context and provides it to applications. The Context Synthesizer makes inferences with
the context collected from the Context Provider, and provides it to applications in abstract form. The Context
Provider Lookup Service provides services that search for the Context Provider. The Context History is
a database which records previous context information, and the Context Consumer is an application that
uses context.
SOCAM (Service-Oriented Context-aware Middleware) [17] is a context-aware architecture for mobile
services. SOCAM takes the role of context interpreter by using a central server. The central server
collects and processes context through the Context Provider, and provides it to clients. Context-aware
mobile services are located at the top of this architecture, so they can use the appropriate level of
context information available and adjust their behavior accordingly.
CoBrA (Context Broker Architecture) is the mediation structure that supports context-aware
computing in intelligent space. Intelligent space refers to the physical space where intelligent systems are
installed. In the center of CoBrA, there is an intelligent context broker that manages and maintains a shared
context model. Those who use mobile phones can use the models as applications.
2.3. Semantic web services
Semantic web services are Web services which adopt Semantic Web technologies in order to
semantically use the many Web services on the Internet. Semantic web services technology aims at
automatic discovery, integration, mediation and execution by adding semantic descriptions to Web
services. By using semantic web services, it is possible to search for and combine Web services
that are suitable for the user's purpose. For example, if a user requests services for a business
trip to Turkey, semantic web services technology can combine and give the user the appropriate required
services, such as airline reservations, hotel reservations, and public transportation for
the destination, depending on the user's profile [18]. In other words, semantic web services provide a
conceptual model and languages to explicitly describe the relevant data and behavior semantics of Web services [19].
Semantic web services interpret and utilize the semantics that are encoded explicitly and formally in
the service description in order to automatically discover, invoke, compose and execute web services [20].
Semantic web technologies and services, as shown in Figure 3, are for the semantic markup describing
web services. The markup consists of three main parts, which are the service profile, the process
model and the grounding. The service profile, which tells "what the service does", is used to advertise
the service, and the process model, which tells how to use the service, gives a detailed
description of a service's operation. Also, the grounding, which tells how to interoperate with
a service, specifies the communication protocol, message formats, and other details.
In this study, we described context-aware services based on the semantic web, in order to support
automatic discovery, integration, mediation and execution of context-aware services.
Recently, a lightweight semantic framework for semantic web services was proposed [21].
It consists of the WSMO-Lite ontology, MicroWSMO and hRESTS. WSMO-Lite presents lightweight
semantic descriptions for services on the web, hRESTS is a microformat which describes RESTful
web services, and MicroWSMO provides a technique to semantically annotate web services
by extending hRESTS. The value of the lightweight framework is that it supports RESTful web services,
which account for over half of all web service protocols. Additionally, it is easy to
annotate web services because hRESTS and MicroWSMO are simple and
easy to use.
Figure 3. Technology map for semantic web services with several applications; there are various applications
for context-aware processing, but every semantic web-based application has an emphasis
on a specific semantic web service [22]
3. METHODOLOGY
In this study, a structure which separates the context provider and the context-aware service is proposed
for the following reasons. Firstly, the redundant generation of context information is not efficient and is
difficult to reuse. Secondly, the user should have a choice of selecting a context-aware service in a situation
where a number of context-aware services are operated, and this can improve the overall efficiency. For example,
information about bus location and arrival time is provided by the government through an open API, and
applications provide various services on various platforms such as iPhones, Android phones, and personal
computers by using this information. A user can choose an application which is appropriate for the user's
situation. We want to investigate the following issues through our research in the semantic web:
− To investigate whether quality differences explain differences in study results.
− As a method for weighting the importance of individual studies when results are being combined.
− To guide the interpretation of findings and determine the strength of inferences.
− To guide proposals for further research.
3.1. Implementation
In order to implement the structure described above, sensors, context providers and context-aware
services should be separated in different layers so they can operate independently. Therefore, as shown in
Figure 4, the framework of this study consists of the context service layer, public context layer, and physical
sensor layer on the entire hierarchy of the three tiers.
The physical sensor layer, which recognizes very basic information, processes the information from
physical sensors individually because it depends on the sensors. That is, each context provider directly
collects sensor data from sensors. The representation of sensor-dependent context is not within the scope of
this study. The context sensing layer is similar to this layer, but the difference is that it uses web services
to obtain information from virtual sensors. The flowchart explains the input sources, processes and output
sources, as shown in Figure 5, with the ontologies and semantic analysis used to extract the published
documents; it transforms sensor information from physical sensors into standardized context structures and
provides the context information based on web services.
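The three-tier separation described above can be sketched as three small classes, one per layer. This is a hypothetical illustration only; the sensor values and the service rule are invented, and the real framework exchanges context over web services rather than direct method calls:

```python
class PhysicalSensorLayer:
    """Bottom tier: reads raw values directly from (simulated) sensors."""
    def read(self):
        return {"temperature_c": 21.5, "location": "lab"}

class PublicContextLayer:
    """Middle tier: normalizes raw sensor data into a standardized context structure."""
    def __init__(self, sensor_layer):
        self.sensor_layer = sensor_layer

    def context(self):
        raw = self.sensor_layer.read()
        return {"subject": "user:unknown",
                "location": raw["location"],
                "temperature": raw["temperature_c"]}

class ContextServiceLayer:
    """Top tier: consumes standardized context and makes a service decision."""
    def __init__(self, context_layer):
        self.context_layer = context_layer

    def recommend(self):
        ctx = self.context_layer.context()
        return "open-window" if ctx["temperature"] > 24 else "no-action"
```

Each layer depends only on the interface of the layer below it, so a different sensor or context source can be swapped in without changing the service tier.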
Figure 4. Context mediation framework, which includes three layers of abstraction: the context
service layer, followed by the public context layer, with the physical sensor layer, for
the semantic context information for different publications
Figure 5. The flow diagram for extracting the publication documents, which takes a dataset as
input source, then collects the dataset for preprocessing to differentiate the known and unknown events
for the ontologies and semantic analysis in order to extract the published documents
While a basic context provider only collects sensor data from physical sensors,
a combined context provider collects from physical sensors as well as from other context providers. The combined
context provider can get sensor information from either only basic context providers or only other
combined context providers, in order to make our architecture extensible. The context information has
openness and interoperability since it is provided based on web services. As shown in
Figure 6, a context-aware semantic web-based architecture extracts the publication papers from digital journals
using the data-layer repositories, and the structure of the combined context provider supports the handling of
context from physical sensors.
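The distinction between a basic and a combined context provider can be illustrated as follows. Because a combined provider can wrap other combined providers, the composition nests to any depth, which is the extensibility property claimed above. All names are illustrative:

```python
class BasicContextProvider:
    """Collects context from a single (simulated) physical sensor."""
    def __init__(self, name, value):
        self.name, self.value = name, value

    def provide(self):
        return {self.name: self.value}

class CombinedContextProvider:
    """Aggregates context from basic providers and/or other combined providers."""
    def __init__(self, providers):
        self.providers = providers

    def provide(self):
        merged = {}
        for p in self.providers:
            merged.update(p.provide())  # later providers override duplicate keys
        return merged
```

For example, a combined provider wrapping another combined provider plus a basic one yields a single merged context dictionary.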
In order to mediate context information based on Web services, a UDDI-based structure to retrieve
desired context information and a representation model to express WSDL-based metadata using ontology are
required, because the existing WSDL and UDDI are not sufficient to represent the semantics of context
information. In addition, a SOAP-based messaging protocol should be designed in order to exchange context
information. The representation and protocols based on web services ensure syntactic interoperability, so
a common ontology is required to ensure semantic interoperability. In other words, when a context-aware service
processes context information from a context provider, a module that semantically recognizes the context is
required. By applying the semantic web service architecture, semantic interoperability can be ensured in our
approach. In the proposed architecture, it is supported at the middleware level, but there is a limitation in
that it is only possible in applications which conform to the standard.
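Semantic interoperability through a common ontology can be sketched as a translation step from each provider's local vocabulary into shared terms before exchange. The vocabularies and mapping tables below are hypothetical stand-ins for the real ontology matching:

```python
# Shared (common-ontology) terms every conforming application understands.
COMMON_ONTOLOGY = {"loc", "temp", "activity"}

# Each provider maps its local property names onto common-ontology terms.
LOCAL_TO_COMMON = {
    "providerA": {"position": "loc", "temperature": "temp"},
    "providerB": {"whereabouts": "loc", "currentTask": "activity"},
}

def to_common(provider_id, context):
    """Translate a provider-local context dict into common-ontology terms.

    Keys with no mapping into the common ontology are dropped, mirroring
    the limitation that only standard-conforming context can be exchanged.
    """
    mapping = LOCAL_TO_COMMON[provider_id]
    out = {}
    for key, value in context.items():
        common_key = mapping.get(key)
        if common_key in COMMON_ONTOLOGY:
            out[common_key] = value
    return out
```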
Figure 6. A context-aware semantic web-based architecture for extracting the publication papers
from digital journals using the data-layer repositories; the architecture
contains four different layers of abstraction [23]
3.2. Architecture
In the context service layer, context-aware services receive context information from the context provider
based on web services, and provide services to users. The discovery, integration, mediation and execution
of context information for context-aware services can be done automatically in semantic web services.
The structure of the context provider for context-aware research combines the context of published papers
with the help of the context interpreter. In order to achieve this, it is required to define the standards for
context representation based on ontology for compatibility, and an ontology-based interpreter is also
required. Interpretation is possible by matching their own ontology and external ontology, so semantic
web ontology matching techniques are required. Otherwise, using a common ontology or upper ontology
for ontology matching is also possible in the semantic web services environment. The role of the context
interpreter is matching contexts from different readers by referencing the internal ontology and external
ontology. The combined context generator combines the contexts by using the context interpreter, and provides
them to context-aware services or other combined context providers. The architecture plays an important
role as it comprises three primary parts, which are the service profile, process model and
grounding [24]. The data layer extracts the data repositories which are used to advertise
the service, and the process model, which tells how to use the service, gives a detailed
description of a service's operation. Figure 7 shows the structure of the context provider for
context-aware research, which combines the context of published papers with the help of the context interpreter.
Also, the grounding, which tells how to interoperate with a service, specifies the communication
protocol, message formats, and other details. In this study, we described context-aware
services based on semantic web analysis in order to support automatic discovery,
integration, mediation and execution of context-aware services.
Figure 7. Structure of the context provider for context-aware research, which combines
the context of published papers with the help of the context interpreter [25]
The same issues of syntactic and semantic interoperability with context providers exist, and semantic
web services can be used to address them. In the architecture of the context-aware service, the context-aware
service processor provides various customized services to users through web services by using user preferences
and the integrated contexts from the context interpreter.
3.3. Training on dataset
This work has been done using the Python programming language with a dedicated library for
semantic web analysis named “Cubic-Web” on any defined dataset; in our case we have used a dataset
for extracting and studying several publications to measure the impact of the context-aware semantic web
application on the results in Table 1. Moreover, as this study is limited to journal documents, future studies
can examine different types of publications using this approach. Furthermore, future
studies can use specific types of publications from different research fields such as medicine, politics, or
economics, which might yield different results. The dataset has been acquired from the link:
https://www.nature.com/sdata/.
Table 1. Different publication articles in the dataset, acquired from the scientific data website [26]
Dataset Type      Journals                                  No. of Articles   No. of users   No. of published articles
Context Aware     IEEE, ACM, SpringerLink, ScienceDirect    4879148           25255450       855816
Context Unaware   Plos One, New Scientist, BMJ, JAMA        371803            1012177        23607
TOTAL                                                       5250951           26267627       879423
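The TOTAL row of Table 1 can be reproduced by summing the per-type rows; a small sketch:

```python
# Per-type rows of Table 1, keyed by dataset type.
rows = {
    "Context Aware":   {"articles": 4879148, "users": 25255450, "published": 855816},
    "Context Unaware": {"articles": 371803,  "users": 1012177,  "published": 23607},
}

# Column-wise totals, matching the TOTAL row of the table.
totals = {k: sum(r[k] for r in rows.values())
          for k in ("articles", "users", "published")}
# totals == {"articles": 5250951, "users": 26267627, "published": 879423}
```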
3.4. Context-aware ontology decisions
In this study, we constructed an ontology for context information by expanding the sensor ontology
presented by different research journals, and we mined the results in the results section based on the concepts
of ontology drivers, shown with the internal and external applicability of the ontology decision in
Figure 8. The ontology is designed to apply semantic web services to context-aware services. In the hierarchy,
the ontology decision, which represents context information, is connected to five different parameters of
both internal and external ontology drivers with the "provides" relationship. This means that context is
provided by the web semantic application and existing systems. Context information represented with context is
also connected to the ontology decisions. The use cases are designed by expanding the internal ontology driver
for finding and extracting the research papers. Context is constrained by research input and its effects:
constraints in our ontology, as well as normal constraints and research constraints that are bounded by
research sensors and their decisions for extracting the published papers.
Figure 8. Components used for building the context-aware web semantic
application with internal and external ontology drivers
4. RESULTS
Based on the above semantic description, we have taken three context-aware research journals for
the semantic web application into account, i.e., ACM, IEEE, and SpringerLink. We observed several parameters
in this section: the publication type, extracted using the context-aware semantic web, in Table 2;
the publication year, with the number of context-aware publications and their percentages, in Table 3; and
the number of contributors, extracted using the context-aware semantic web techniques, in Table 4.
According to the results in Table 5 and Table 6, we find that the most accurate results after manual
investigation occur for the three context-aware research journals, with higher accuracy for the type
of evaluation and analysis method, using Cubic-Web for the semantic analysis.
Table 2. Extracting the type of publications using the Cubic-Web semantic web techniques
Publication Type Amount Share (%)
Full research paper 15 17.44%
Introductory or work in progress 30 34.88%
Survey 1 1.16%
Tool demonstration 17 9.30%
Seminar 9 10.47%
Journal original research 20 23.26%
Journal survey 3 3.49%
TOTAL 95 100.00%
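The percentage column in the tables is each category's share of the 95 extracted papers. The sketch below recomputes the shares from the Table 2 counts; note that the recomputed values may differ slightly from the printed column, which appears to contain rounding inconsistencies.

```python
# Counts taken from Table 2; shares recomputed as percentage of the total.
counts = {
    "Full research paper": 15,
    "Introductory or work in progress": 30,
    "Survey": 1,
    "Tool demonstration": 17,
    "Seminar": 9,
    "Journal original research": 20,
    "Journal survey": 3,
}
total = sum(counts.values())
shares = {k: round(100 * v / total, 2) for k, v in counts.items()}
print(total)                           # 95
print(shares["Full research paper"])   # 15.79
```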
Table 3. Extracting the year of publications using the semantic web techniques
Publication Year Amount Share (%)
2010 3 3.49
2011 4 4.65
2012 9 10.47
2013 18 16.28
2014 11 12.79
2015 14 16.28
2016 15 11.63
2017 11 12.79
2018 10 11.63
TOTAL 95 100.00%
Table 4. Extracting the number of contributors using the context-aware semantic web techniques
Contributors Amount Share (%)
From 1 to 10 4 4.65
From 11 to 20 7 8.14
From 21 to 30 2 3.45
From 31 to 40 0 0.00
From 41 to 50 3 2.38
Exceeding 50 0 0.00
TOTAL 16 19.55%
10. ISSN: 1693-6930
TELKOMNIKA Telecommun Comput El Control, Vol. 18, No. 5, October 2020: 2549 - 2559
Table 5. Extracting the evaluation method used for results using the context-aware semantic web techniques
Type of Evaluation Amount Share (%)
Application 45 52.33%
Experiment with human subjects 14 16.28%
Conference 8 9.30%
Field experiment 6 6.98%
Action of pretending 6 6.98%
Case study 13 4.65%
Experiment with software subjects 2 2.33%
Discussion 1 1.16%
TOTAL 95 100.00%
Table 6. Extracting the analysis method used for results using the context-aware semantic web techniques
Analysis Method Amount Share (%)
Qualitative analysis 15 17.45%
Content analysis 58 66.45%
Narrative analysis 10 5.80%
Data analysis 12 9.30%
TOTAL 95 100.00%
We ran our experiments by setting the topic sources to ACM, IEEE, and SpringerLink, respectively.
When we classified our corpus into five topics, the returned results were general, i.e., not detailed;
the average recall and average precision of the three context-aware research journals, obtained using
Cubic-Web for semantic analysis, are shown in Figures 9 and 10. For example, terms such as "information"
and "bacteria" fell into a single topic (Sciences), but when we classified our corpus into ten topics,
these terms were separated into two topics (Computers and Medical). This supports that our system is working
well, since the ACM topics are also included in IEEE. However, when we classified our publications into
SpringerLink topics, some redundant results appeared; for example, the term "information" occurred across
topics as diverse as computers, medical, and finance publications.
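The averages plotted in Figures 9 and 10 follow the standard definitions of precision and recall over retrieved and relevant paper sets. The sketch below uses made-up per-query placeholder sets, not the paper's data, to show how the averages are computed.

```python
# Precision = hits / retrieved; recall = hits / relevant; averages are
# taken over queries. The paper IDs below are invented placeholders.
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical retrieval results for two queries: (retrieved, relevant)
queries = [
    (["p1", "p2", "p3", "p4"], ["p1", "p2", "p5"]),
    (["p6", "p7"],             ["p6", "p7", "p8"]),
]
pairs = [precision_recall(ret, rel) for ret, rel in queries]
avg_precision = sum(p for p, _ in pairs) / len(pairs)
avg_recall = sum(r for _, r in pairs) / len(pairs)
print(round(avg_precision, 3), round(avg_recall, 3))  # 0.75 0.667
```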
Figure 9. The average recall and average precision of three context-aware research journals using Cubic-Web for semantic analysis
Figure 10. The average recall and average precision of three context-aware research journals using Cubic-Web for semantic analysis
5. CONCLUSIONS
This research paper improves on the results of previous work in topic modeling and query
expansion for publications in three context-aware research journals, i.e., ACM, IEEE, and SpringerLink, on
an open-source dataset, using semantic web analysis with the Cubic-Web framework in the Python
programming language. Finally, description examples for the context provider service, service profile, and
service process were demonstrated using Cubic-Web for semantic analysis, in order to show the descriptions
required to implement the semantic web services of context-aware services for extracting research papers
from digital libraries. We have evaluated the context-aware model, which can describe the semantics of a
publication without explicit semantics, and we have reported the average recall and average accuracy for all
the context-aware research journals in this work. Moreover, as this study is limited to journal documents,
future studies can examine other types of publications using this approach.