Today's market evolution and the high volatility of business requirements put an increasing emphasis on the ability of systems to accommodate the changes required by new organizational needs while maintaining the satisfiability of security objectives. This is all the more true in the case of collaboration and interoperability between different organizations, and thus between their information systems. Ontology mapping has been used for interoperability, and several mapping systems have emerged to support it. The usual solutions, however, do not address security: almost all systems map ontologies that are unsecured. We have developed a system for mapping secured ontologies using the concept of graph similarity.
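As a rough illustration of the general idea only (not the actual system described above), similarity between concepts in two ontologies can be approximated by comparing their neighbourhoods with a Jaccard score; the ontology structure and names below are invented:

```python
# Hypothetical sketch: neighbourhood-based similarity between concepts
# in two ontologies, each represented as an adjacency dict
# mapping a concept to the labels of its related concepts.

def jaccard(a, b):
    """Jaccard similarity of two label sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def concept_similarity(onto_a, onto_b, concept_a, concept_b):
    """Compare two concepts by the labels of their neighbours."""
    neigh_a = set(onto_a.get(concept_a, ()))
    neigh_b = set(onto_b.get(concept_b, ()))
    return jaccard(neigh_a, neigh_b)

onto1 = {"Person": {"name", "address", "employer"}}
onto2 = {"Individual": {"name", "address", "age"}}
print(concept_similarity(onto1, onto2, "Person", "Individual"))  # 0.5
```

A full mapping system would of course combine many such structural signals, but the neighbourhood comparison conveys the core of graph-similarity matching.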
OPTIMIZATION IN ENGINE DESIGN VIA FORMAL CONCEPT ANALYSIS USING NEGATIVE ATTR... (csandit / cscpconf)
There is an exhaustive body of work in the area of engine design covering different methods that try to reduce production costs and to optimize the performance of these engines. Mathematical methods based on statistics, self-organizing maps and neural networks achieve the best results in these designs, but configuring these methods is not easy because of the high number of parameters that have to be measured. In this work we extend an algorithm for computing implications between attributes with positive and negative values to obtain the mixed concept lattice, and we also propose a theoretical method based on these results for engine simulators, adjusting specific elements to obtain optimal engine configurations.
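To make the underlying machinery concrete, here is a minimal sketch of the classical formal-concept derivation operators on a toy engine context with positive attributes only; the paper's negative-attribute extension is not reproduced, and the context values are invented:

```python
# Toy formal context: object -> set of attributes it has.
from itertools import combinations

context = {
    "engine1": {"turbo", "diesel"},
    "engine2": {"turbo", "petrol"},
    "engine3": {"diesel"},
}
ALL_ATTRS = set().union(*context.values())

def extent(attrs):
    """Objects having all attributes in attrs."""
    return {g for g, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by all objects in objs."""
    return set.intersection(*(context[g] for g in objs)) if objs else set(ALL_ATTRS)

# Enumerate formal concepts by closing every attribute subset:
# a concept is a pair (extent, intent) closed under both operators.
concepts = set()
for r in range(len(ALL_ATTRS) + 1):
    for combo in combinations(sorted(ALL_ATTRS), r):
        e = extent(set(combo))
        concepts.add((frozenset(e), frozenset(intent(e))))
print(len(concepts))  # 6 concepts in this toy lattice
```

Mixed (positive/negative) concepts would additionally track, for each object, the attributes it explicitly does not have; the closure mechanics stay the same.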
A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ... (ijaia)
Objects or structures that are regular take uniform dimensions. Based on the concepts of regular models,
our previous research work has developed a system of a regular ontology that models learning structures
in a multiagent system for uniform pre-assessments in a learning environment. This regular ontology has
led to the modelling of a classified rules learning algorithm that predicts the actual number of rules needed
for inductive learning processes and decision making in a multiagent system. But not all processes or
models are regular. This paper therefore presents a system of polynomial equations that can estimate and predict
the required number of rules of a non-regular ontology model, given some defined parameters.
Entity Resolution is a problem that occurs in many information integration processes and applications. The primary goal of entity resolution is to resolve data references to the same corresponding real-world entity. The ambiguity in references arises in various networks such as social networks, biological networks, citation graphs and many others. Ambiguity in references not only leads to data redundancy but also to inaccuracies in knowledge representation, extraction and query processing. Entity resolution is the solution to this problem. There have been many approaches, such as pair-wise similarity over attributes of references, a parallel approach for morphing the graph data onto a cluster of nodes (P-Swoosh) [2], and relational clustering that makes use of relational information in addition to attribute similarity. In this article, we make use of relational clustering to resolve author name ambiguities in a subset of a real-world dataset: the US patent network, consisting of more than 650,000 author references.
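As a toy stand-in for the pair-wise attribute-similarity baseline mentioned above (not the relational clustering actually applied to the patent data), a greedy single-link grouping of author references by string similarity might look like this; the threshold and names are arbitrary:

```python
# Sketch: group author-name references whose string similarity
# exceeds a threshold, as a minimal pair-wise entity resolution.
from difflib import SequenceMatcher

def name_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve(references, threshold=0.7):
    """Greedy single-link clustering of references by name similarity."""
    clusters = []
    for ref in references:
        for cluster in clusters:
            if any(name_sim(ref, member) >= threshold for member in cluster):
                cluster.append(ref)
                break
        else:  # no existing cluster matched
            clusters.append([ref])
    return clusters

refs = ["J. Smith", "John Smith", "A. Kumar"]
print(resolve(refs))  # [['J. Smith', 'John Smith'], ['A. Kumar']]
```

Relational clustering improves on this by also comparing who each reference co-occurs with (co-authors, citations), which disambiguates names that are textually identical.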
Application for Logical Expression Processing (csandit)
Processing of logical expressions, especially the conversion from conjunctive normal form (CNF) to disjunctive normal form (DNF), is a very common problem in many areas of information retrieval and processing. There are some existing solutions for logical symbolic calculation, but none of them offers CNF-to-DNF conversion. A new application for this purpose is presented in this paper.
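A minimal sketch of the CNF-to-DNF conversion itself, by distributing one literal per clause and pruning contradictory and subsumed terms (an illustration of the technique, not the presented application's code):

```python
# CNF -> DNF by distribution. A CNF formula is a list of clauses;
# each clause is a set of literals, with negation written as "~x".
from itertools import product

def cnf_to_dnf(cnf):
    """Pick one literal from every clause, drop contradictory
    terms and terms subsumed by a smaller term."""
    terms = set()
    for choice in product(*cnf):
        term = frozenset(choice)
        # skip terms containing both x and ~x
        if any(("~" + lit) in term for lit in term):
            continue
        terms.add(term)
    # keep only minimal terms (no proper subset is also a term)
    return {t for t in terms if not any(s < t for s in terms)}

# (x OR y) AND (~x OR y)  simplifies to  y
cnf = [{"x", "y"}, {"~x", "y"}]
print(cnf_to_dnf(cnf))  # {frozenset({'y'})}
```

Note that distribution can blow up exponentially in the number of clauses, which is one reason a dedicated tool for this conversion is useful.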
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING (dannyijwest)
Social networks have become one of the most popular platforms allowing users to communicate and share their interests without being at the same geographical location. The great and rapid growth of social media sites such as Facebook, LinkedIn and Twitter generates a huge amount of user-generated content. Thus, improving information quality and integrity becomes a great challenge for all social media sites, allowing users to get the desired content or be linked to the best relation using improved search and linking techniques. Introducing semantics to social networks will therefore widen the representation of the social networks. In this paper, a new model of social networks based on semantic tag ranking is introduced. The model is based on the concept of multi-agent systems. In this proposed model, the representation of social links is extended by the semantic relationships found in the vocabularies known as tags in most social networks. The proposed model for the social media engine is based on enhanced Latent Dirichlet Allocation (E-LDA) as a semantic indexing algorithm, combined with TagRank as a social network ranking algorithm. The improvement in the E-LDA phase is achieved by running LDA with optimal parameters; a filter is then introduced to enhance the final indexing output. In the ranking phase, using TagRank on top of the indexing phase improves the ranking output. Simulation results of the proposed model show improvements in both indexing and ranking output.
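As a hedged sketch of the ranking idea only, a TagRank-style score can be computed with a PageRank iteration over a tag co-occurrence graph; the graph and tag names below are invented, and this is not the paper's E-LDA pipeline:

```python
# PageRank-style iteration over a tag graph: tag -> list of
# co-occurring tags it "links" to. Scores sum to 1.

def tag_rank(graph, damping=0.85, iters=50):
    n = len(graph)
    rank = {t: 1.0 / n for t in graph}
    for _ in range(iters):
        new = {}
        for t in graph:
            incoming = sum(
                rank[s] / len(graph[s]) for s in graph if t in graph[s]
            )
            new[t] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

graph = {
    "python": ["ml", "data"],
    "ml": ["python", "data"],
    "data": ["python"],
}
scores = tag_rank(graph)
print(max(scores, key=scores.get))  # "python" receives the most link mass
```

In the model described above, the semantic indexing phase would determine which tags co-occur (and how strongly) before any such ranking is computed.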
DESIGN METHODOLOGY FOR RELATIONAL DATABASES: ISSUES RELATED TO TERNARY RELATI... (ijdms)
Entity-Relationship (ER) modeling plays a major role in relational database design. The data requirements
are conceptualized using an ER model and then transformed to relations. If the requirements are well
understood by the designer and then if the ER model is modeled and transformed to relations, the resultant
relations will be normalized. However, the choice of modeling relationships between entities with
appropriate degree and cardinality ratio has a very severe impact on database design. In this paper, we
focus on the issues related to modeling binary relationships, ternary relationships, decomposing ternary
relationships to binary equivalents and transforming the same to relations. The impact of applying higher
normal forms to relations with composite keys is analyzed. We have also proposed a methodology which
database designers must follow during each phase of database design.
Metrics for Evaluating Quality of Embeddings for Ontological Concepts (Saeedeh Shekarpour)
Although there is an emerging trend towards generating embeddings for primarily unstructured data and, recently, for structured data, no systematic suite for measuring the quality of embeddings has been proposed yet.
This deficiency is felt even more strongly for embeddings generated for structured data, because there are no concrete evaluation metrics measuring the quality of the encoded structure, or of the semantic patterns, in the embedding space.
In this paper, we introduce a framework containing three distinct tasks concerned with the individual aspects of ontological concepts: (i) the categorization aspect, (ii) the hierarchical aspect, and (iii) the relational aspect.
Then, in the scope of each task, a number of intrinsic metrics are proposed for evaluating the quality of the embeddings.
Furthermore, w.r.t. this framework, multiple experimental studies were run to compare the quality of the available embedding models.
Employing this framework in future research can reduce misjudgment and provide greater insight about quality comparisons of embeddings for ontological concepts.
We have made our sampled data and code available at https://github.com/alshargi/Concept2vec under the GNU General Public License v3.0.
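One intrinsic metric in the spirit of the categorization aspect can be illustrated as the cosine similarity between a concept's embedding and the centroid of its instances' embeddings; the vectors below are toy values, not Concept2vec output:

```python
# Categorization-style score: how close is a concept vector to the
# centroid of the vectors of its instances?
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

concept = [1.0, 0.0]                               # e.g. "City"
instances = [[0.9, 0.1], [1.0, 0.2], [0.8, 0.0]]   # e.g. city entities
score = cosine(concept, centroid(instances))
print(round(score, 3))  # 0.994: concept sits near its instances
```

A score near 1 suggests the embedding space places the concept where its instances cluster; hierarchical and relational aspects would be scored with analogous geometric tests.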
This presentation is on a recommender system for question paper prediction using machine learning techniques. We conducted a literature survey and implemented the system using the same techniques.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, covering new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance and readability. The articles published in our journal can be accessed online.
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS (ijseajournal)
ABSTRACT
In this paper we propose a novel method to cluster categorical data while retaining its context. Typically, clustering is performed on numerical data. However, it is often useful to cluster categorical data as well, especially when dealing with data in real-world contexts. Several methods exist which can cluster categorical data, but our approach is unique in that we use recent text-processing and machine learning advancements such as GloVe and t-SNE to develop a context-aware clustering approach (using pre-trained word embeddings). We encode words or categorical data into numerical, context-aware vectors that we use to cluster the data points using common clustering algorithms like K-means.
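A minimal K-means sketch on made-up vectors illustrates the clustering step; a real pipeline would first look each word up in pre-trained GloVe embeddings rather than using the toy points below:

```python
# Plain K-means on small vectors (toy stand-ins for word embeddings).
import random

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(cluster):
    n = len(cluster)
    return [sum(col) / n for col in zip(*cluster)]

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[nearest].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return clusters

# Two tight groups of "embeddings"; K-means should separate them.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
clusters = kmeans(points, 2)
print([len(c) for c in clusters])
```

With real GloVe vectors the points would be 50- to 300-dimensional, but the algorithm is unchanged.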
IJWEST CFP (9).pdf - CALL FOR ARTICLES...! IS INDEXING JOURNAL...! Internationa... (dannyijwest)
Paper Submission
Authors are invited to submit papers for this journal through Email: ijwestjournal@airccse.org / ijwest@aircconline.com or through Submission System.
Important Dates
Submission Deadline : June 01, 2024
Notification :July 01, 2024
Final Manuscript Due : July 08, 2024
Publication Date : Determined by the Editor-in-Chief
Here's where you can reach us : ijwestjournal@yahoo.com or ijwestjournal@airccse.org or ijwest@aircconline.com
Cybercrimes in the Darknet and Their Detections: A Comprehensive Analysis and... (dannyijwest)
Although the Dark web was originally used for maintaining privacy-sensitive communication for business, for intelligence services in defence, government and business organizations, and for fighting censorship and blocked content, the technologies behind the Dark web were later abused by criminals to conduct crimes ranging from drug dealing to contract assassinations on a widespread scale. Since the communication remains secure and untraceable, criminals can easily use dark web services via The Onion Router (TOR), hide their illegal motives and conceal their criminal activities. This makes it very difficult to monitor and detect cybercrimes on the dark web. With the evolution of machine learning, natural language processing techniques, computational big data applications and hardware, there is a growing interest in exploiting dark web data to monitor and detect criminal activities. Due to the anonymity provided by the Dark web and the rapid disappearance and change of the uniform resource locators (URLs) of its resources, it is not as easy to crawl the Dark web and obtain data as it is on the usual surface web, which limits researchers and law enforcement agencies in analysing the data. Therefore, there is an urgent need to study the technology behind the Dark web, its widespread abuse, its impact on society and the existing systems, in order to identify the sources of drug deals or terrorist activities. In this research, we analysed the predominant darker sides of the world wide web (WWW), their volumes, their contents and their ratios. We performed an analysis of the larger malicious or hidden activities that occupy the major portions of the Dark net, and of the tools and techniques used to identify cybercrimes that happen inside the dark web. We applied a systematic literature review (SLR) approach to the resources where actual dark net data have been used for research purposes in several areas.
From this SLR, we identified the approaches (tools and algorithms) that have been applied to analyse Dark net data, along with the key gaps and the key contributions of the existing works in the literature. In our study, we find that the main challenges in crawling the dark web and collecting forum data are: the scalability of the crawler, the content-selection trade-off, the social obligations of a TOR crawler, and the limitations of the techniques used in automatic sentiment analysis to understand criminals' forums and thereby monitor them. From the comprehensive analysis of existing tools, our study summarizes the most widely used tools. However, forum topics change rapidly as their sources change; criminals inject noise to obfuscate a forum's main topic and thus remain undetectable. Supervised techniques therefore fail to address the above challenges; semi-supervised techniques would be an interesting research direction.
FFO: Forest Fire Ontology and Reasoning System for Enhanced Alert and Managem... (dannyijwest)
Forest fires or wildfires pose a serious threat to property, lives, and the environment. Early detection and mitigation of such emergencies therefore play an important role in reducing the severity of the impact caused by wildfire. Unfortunately, there is often an improper or delayed mechanism for forest fire detection, which leads to destruction and losses. These anomalies in detection can be due to defects in sensors or a lack of proper information interoperability among the sensors deployed in forests. This paper presents a lightweight ontological framework to address these challenges. Interoperability issues are caused by heterogeneity in the technologies used and in the data created by different sensors. Therefore, through the proposed Forest Fire Detection and Management Ontology (FFO), we introduce a standardized model to share and reuse knowledge and data across different sensors. The proposed ontology is validated using semantic reasoning and query processing. The reasoning and querying processes are performed on real-time data gathered from experiments conducted in a forest and stored as RDF triples based on the design of the ontology. The outcomes of queries and inferences from reasoning demonstrate that FFO is feasible for the early detection of wildfire and facilitates efficient process management subsequent to detection.
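To illustrate the kind of query that can be run over sensor data stored as RDF triples, here is a toy in-memory triple store with an (s, p, o) pattern matcher; the sensor and predicate names are invented for the sketch, not actual FFO terms:

```python
# Toy triple store: each triple is (subject, predicate, object).
triples = [
    ("sensor1", "hasType", "TemperatureSensor"),
    ("sensor1", "locatedIn", "ZoneA"),
    ("sensor1", "hasReading", "87"),
    ("sensor2", "hasType", "SmokeSensor"),
    ("sensor2", "locatedIn", "ZoneA"),
    ("sensor2", "hasReading", "high"),
]

def query(s=None, p=None, o=None):
    """Match triples against an (s, p, o) pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which sensors are located in ZoneA?
zone_a = [s for s, _, _ in query(p="locatedIn", o="ZoneA")]
print(zone_a)  # ['sensor1', 'sensor2']
```

A real deployment would store the triples in an RDF store and express the same pattern as a SPARQL query, with ontology-based reasoning inferring, for example, which readings constitute a fire alert.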
Call For Papers - 10th International Conference on Artificial Intelligence and ... (dannyijwest)
** Registration is currently open **
Call for Research Papers!!!
Free: extended papers will be published at no cost.
10th International Conference on Artificial Intelligence and Applications (AI 2024)
July 20 ~ 21, 2024, Toronto, Canada
https://csty2024.org/ai/index
Submission Deadline: May 11, 2024
Contact Us
Here's where you can reach us : ai@csty2024.org or ai.conference@yahoo.com
Submission System
https://csty2024.org/submission/index.php
CALL FOR ARTICLES...! IS INDEXING JOURNAL...! International Journal of Web &... (dannyijwest)
Paper Submission
Authors are invited to submit papers for this journal through Email: ijwest@aircconline.com or through Submission System. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this Journal.
Important Dates
• Submission Deadline: March 16, 2024
• Notification : April 13, 2024
• Final Manuscript Due : April 20, 2024
• Publication Date : Determined by the Editor-in-Chief
Contact Us
Here's where you can reach us
ijwestjournal@yahoo.com or ijwestjournal@airccse.org or ijwest@aircconline.com
Submission URL : https://airccse.com/submissioncs/home.html
ENHANCING WEB ACCESSIBILITY - NAVIGATING THE UPGRADE OF DESIGN SYSTEMS FROM WCAG 2.0 TO WCAG 2.1
Hardik Shah
Department of Information Technology, Rochester Institute of Technology, USA
ABSTRACT
In this research, we explore the vital transition of Design Systems from Web Content Accessibility
Guidelines (WCAG) 2.0 to WCAG 2.1, emphasizing its role in enhancing web accessibility and inclusivity
in digital environments. The study outlines a comprehensive strategy for achieving WCAG 2.1 compliance,
encompassing assessment, strategic planning, implementation, and testing, with a focus on collaboration
and user involvement. It also addresses the challenges in using web accessibility tools, such as their
complexity and the dynamic nature of accessibility standards. The paper looks forward to the integration
of emerging technologies like AI, ML, NLP, VR, and AR in accessibility tools, advocating for universal
design and user-centered approaches. This research acts as a crucial guide for organizations aiming to
navigate the changing landscape of web accessibility, underscoring the importance of continuous learning
and adaptation to maintain and enhance accessibility in digital platforms.
KEYWORDS
Web accessibility, WCAG 2.1, Design Systems, Web accessibility tools, Artificial Intelligence
PDF LINK:https://aircconline.com/ijwest/V15N1/15124ijwest01.pdf
VOLUME LINK:https://www.airccse.org/journal/ijwest/vol15.html
OTHER INFORMATION:https://www.airccse.org/journal/ijwest/ijwest.html
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfKamal Acharya
The College Bus Management system is completely developed by Visual Basic .NET Version. The application is connect with most secured database language MS SQL Server. The application is develop by using best combination of front-end and back-end languages. The application is totally design like flat user interface. This flat user interface is more attractive user interface in 2017. The application is gives more important to the system functionality. The application is to manage the student’s details, driver’s details, bus details, bus route details, bus fees details and more. The application has only one unit for admin. The admin can manage the entire application. The admin can login into the application by using username and password of the admin. The application is develop for big and small colleges. It is more user friendly for non-computer person. Even they can easily learn how to manage the application within hours. The application is more secure by the admin. The system will give an effective output for the VB.Net and SQL Server given as input to the system. The compiled java program given as input to the system, after scanning the program will generate different reports. The application generates the report for users. The admin can view and download the report of the data. The application deliver the excel format reports. Because, excel formatted reports is very easy to understand the income and expense of the college bus. This application is mainly develop for windows operating system users. In 2017, 73% of people enterprises are using windows operating system. So the application will easily install for all the windows operating system users. The application-developed size is very low. The application consumes very low space in disk. Therefore, the user can allocate very minimum local disk space for this application.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), itsignificantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
International Journal of Web & Semantic Technology (IJWesT) Vol.3, No.4, October 2012
DOI : 10.5121/ijwest.2012.3408

SECURED ONTOLOGY MAPPING

Manjula Shenoy K.1, Dr. K.C. Shet2, Dr. U. Dinesh Acharya1
1 Department of Computer Engineering, Manipal University, MIT, Manipal
manju.shenoy@manipal.edu, dinesh.acharya@manipal.edu
2 Department of Computer Engineering, NITK, Surathkal
kcshet@rediffmail.com
ABSTRACT
Today's market evolution and the high volatility of business requirements put an increasing emphasis on the ability of systems to accommodate the changes required by new organizational needs while keeping security objectives satisfied. This is all the more true in the case of collaboration and interoperability between different organizations and thus between their information systems. Ontology mapping has been used for interoperability, and several mapping systems have evolved to support it. Usual solutions do not take care of security; that is, almost all systems map ontologies which are unsecured. We have developed a system for mapping secured ontologies using the graph similarity concept. Here we give no importance to the strings that describe ontology concepts, properties, etc., because these strings may be encrypted in the secured ontology. Instead we use the pure graph structure to determine the mapping between the concepts of two given secured ontologies. The paper also reports the accuracy of the experiments in tabular form in terms of precision, recall and F-measure.
KEYWORDS
Ontology, Ontology mapping, Security, Interoperability.
1. INTRODUCTION
Researchers have developed several tools that enable organizations to share information, but most of these have not taken into account the necessity of maintaining the privacy and confidentiality of the data and metadata of the organizations that want to share information. Consider the scenario of the militaries of two different countries wanting to share information about a mission at hand while preserving the privacy of their systems. To the best of our knowledge, current systems do not allow this type of information sharing.
The need for secured information sharing also exists within an organization. Different departments may use different systems which are constructed autonomously, and secure interoperability may be required there too.
Privacy should be maintained for both data and metadata. Metadata describes how data is organized (the data schema), how access is controlled in the organization (the internal access control policy and role hierarchies), and the semantics of the data used in the organization (the ontology).
Organizations looking to interoperate largely use metadata such as ontologies to capture the semantics of the terms used in the information sources they maintain. Normally it has been assumed that these ontologies will be published by the organizations. Published ontologies from different organizations are mapped and matching rules are generated. Queries to information sources are rewritten using these matching rules so that the vocabulary used in the query matches the vocabulary of the information source.
Unlike in the traditional setting, some organizations may not wish to publish their metadata or share it with external users, yet they still want interoperation. In this case the privacy of the metadata must be preserved: the external user should not have access to the ontologies in cleartext. The ontologies may therefore be encrypted and then published, and the mapping system should be able to recognize mappings between these encrypted ontologies. Here we present one such system.
2. RELATED WORK
The present ontology mapping systems can be classified into the following categories.
1. Word similarity based: matching is performed based on the similarity of the words describing concepts and properties, or the names of concepts and properties occurring in the ontology [4].
2. Structure based: the structure of the ontologies is used for matching concepts [5][6][7].
3. Instance based: these take the instances under concepts to find matchings [8]. These methods are further subdivided into opaque and pattern based. In opaque instance matching, statistical properties such as distribution, entropy and mutual information are used; in pattern-based matching, instance patterns are matched.
4. Inference based: the semantics of the concepts in the ontologies are expressed as rules in a logical language and the matching is performed using an inference engine.
There are also hybrid algorithms for matching ontologies.
[1] discusses the need for secured data sharing in or among organizations and [2] explains the need for secured data mining. [3] proposes two methods for privacy-preserving ontology matching, one of which is semi-automatic, while the other requires the dictionaries, thesauri or corpora to be encrypted. Our method falls purely under structure-based ontology matching and can be applied to encrypted ontologies. The graph matching technique we used is defined in the literature [4].
3. GRAPH MATCHING TECHNIQUE USED
3.1. Generalizing hubs and authorities [17]
Efficient web search engines such as Google are often based on the idea of characterizing the most important vertices in a graph representing the connections or links between pages on the web. One such method, proposed by Kleinberg [16], identifies, in a set of pages relevant to a query, the subset of pages that are good hubs or good authorities. For example, for the query "university", the home pages of Oxford, Harvard, and other universities are good authorities, whereas web pages that point to these home pages are good hubs. Good hubs are pages that point to good authorities, and good authorities are pages that are pointed to by good hubs. From these implicit relations, Kleinberg derives an iterative method that assigns an "authority score" and a "hub score" to every vertex of a given graph. These scores can be obtained as the limit of a converging iterative process, described below.
Let G = (V,E) be a graph with vertex set V and edge set E, and let h_j and a_j be the hub and authority scores of vertex j. We initialize these scores with some positive values and then update them simultaneously for all vertices according to the following mutually reinforcing relation: the hub score of vertex j is set equal to the sum of the authority scores of all vertices pointed to by j, and, similarly, the authority score of vertex j is set equal to the sum of the hub scores of all vertices pointing to j:

    h_j <- sum of a_i over all i with (j, i) in E,
    a_j <- sum of h_i over all i with (i, j) in E.
Let B be the matrix whose entry (i, j) is equal to the number of edges between the vertices i and j in G (the adjacency matrix of G), and let h and a be the vectors of hub and authority scores. The above updating equations then take the simple form

    h_{k+1} = B a_k,   a_{k+1} = B^T h_k,   k = 0, 1, ...,

which we denote in compact form by

    x_{k+1} = M x_k,   where x_k = [h_k; a_k]   and   M = [[0, B], [B^T, 0]].

Notice that the matrix M is symmetric and nonnegative. We are interested only in the relative scores, and we will therefore consider the normalized vector sequence

    z_0 > 0,   z_{k+1} = M z_k / ||M z_k||_2,   k = 0, 1, ...,

where ||.||_2 is the Euclidean vector norm. Notice that the matrix M has the property that

    M^2 = [[B B^T, 0], [0, B^T B]],

and from this equality it follows that, if the dominant invariant subspaces associated with B B^T and B^T B have dimension 1, then the normalized hub and authority scores are simply given by the normalized dominant eigenvectors of B B^T and B^T B. This is the definition used in [16] for the authority and hub scores of the vertices of G. The arbitrary choice of z_0 = 1 made in [16] is shown here to have an extremal norm justification. Notice that when the invariant subspace has dimension 1, there is nothing particular about the starting vector 1, since any other positive vector z_0 would give the same result. We now generalize this construction. The authority score of vertex j of G can be thought of as a similarity score between vertex j of G and the vertex "authority" of the structure graph

    hub -> authority

and, similarly, the hub score of vertex j of G can be seen as a similarity score between vertex j and the vertex "hub". The mutually reinforcing updating iteration used above can be generalized to graphs that are different from the hub-authority structure graph.
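The hub/authority iteration described above can be sketched in a few lines of Python (a minimal sketch: the 3-page web graph is hypothetical, the iteration count is fixed rather than tested for convergence, and scores are updated simultaneously as in the text):

```python
# Minimal sketch of the hub/authority iteration (HITS, Kleinberg [16]).
# B[i][j] = 1 means page i links to page j; the graph is hypothetical.

def hits(B, iterations=100):
    n = len(B)
    h = [1.0] * n          # hub scores, initialized to positive values
    a = [1.0] * n          # authority scores
    for _ in range(iterations):
        # authority of j: sum of hub scores of vertices pointing to j
        a_new = [sum(B[i][j] * h[i] for i in range(n)) for j in range(n)]
        # hub of i: sum of authority scores of vertices pointed to by i
        h_new = [sum(B[i][j] * a[j] for j in range(n)) for i in range(n)]
        # normalize with the Euclidean norm, keeping only relative scores
        na = sum(x * x for x in a_new) ** 0.5 or 1.0
        nh = sum(x * x for x in h_new) ** 0.5 or 1.0
        a = [x / na for x in a_new]
        h = [x / nh for x in h_new]
    return h, a

# pages 0 and 1 are hubs that both point to the authority page 2
B = [[0, 0, 1],
     [0, 0, 1],
     [0, 0, 0]]
h, a = hits(B)
```

On this graph the iteration converges to page 2 as the single authority, with pages 0 and 1 sharing equal hub scores.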
The idea of this generalization is easier to grasp with an example; we illustrate it first on the path graph with three vertices and then provide a definition for arbitrary graphs. Let G be a graph with edge set E and adjacency matrix B, and consider the structure graph

    1 -> 2 -> 3
With each vertex i of G we now associate three scores x_{i1}, x_{i2}, and x_{i3}, one for each vertex of the structure graph. We initialize these scores with some positive value and then update them according to the following mutually reinforcing relation:

    x_{i1} <- sum of x_{j2} over all j with (i, j) in E,
    x_{i2} <- sum of x_{j1} over all j with (j, i) in E  +  sum of x_{j3} over all j with (i, j) in E,
    x_{i3} <- sum of x_{j2} over all j with (j, i) in E,

or, in matrix form (we denote by x_j the column vector with entries x_{ij}),

    x_1 <- B x_2,   x_2 <- B^T x_1 + B x_3,   x_3 <- B^T x_2,

which we again denote x_{k+1} = M x_k. The situation is now identical to that of the previous example and all convergence arguments given there apply here as well. We now come to a description of the general case. Assume that we have two directed graphs G_A and G_B with n_A and n_B vertices and edge sets E_A and E_B. We think of G_A as a structure graph that plays the role of the graphs hub -> authority and 1 -> 2 -> 3 in the above examples. We consider real scores x_{ij} for i = 1, ..., n_B and j = 1, ..., n_A and simultaneously update all scores according to the following updating equation:

    x_{ij} <- sum of x_{rs} over all (r, i) in E_B and (s, j) in E_A  +  sum of x_{rs} over all (i, r) in E_B and (j, s) in E_A.
This equation can be given an interpretation in terms of the product graph of G_A and G_B. The product graph of G_A and G_B is a graph that has n_A * n_B vertices and that has an edge between vertices (i1, j1) and (i2, j2) if there is an edge between i1 and i2 in G_A and there is an edge between j1 and j2 in G_B. The above updating equation is then equivalent to replacing the scores of all vertices of the product graph by the sum of the scores of the vertices linked by an outgoing or incoming edge. The equation can also be written in more compact matrix form. Let X_k be the n_B x n_A matrix of entries x_{ij} at iteration k. Then the updating equations take the simple form

    X_{k+1} = B X_k A^T + B^T X_k A,

where A and B are the adjacency matrices of G_A and G_B. This equation was further revised by Laure Ninove [18], who replaces X_k by the normalized similarity matrix S_k and iterates

    S_{k+1} = (B S_k A^T + B^T S_k A) / ||B S_k A^T + B^T S_k A||.
4. SECURED ONTOLOGY MATCHING
4.1. Graph Similarity Measure Matrix
First we explain the graph matching technique we used. Consider the two graphs Ga and Gb shown in Figure 1. If we want to match vertex 1 of Ga with vertex 4 of Gb, we need to find how similar vertex 2 of Ga is to vertex 2 of Gb, and vertex 2 of Ga to vertex 1 of Gb.

Figure 1. Graphs to be matched

If A is the adjacency matrix of Ga, B is the adjacency matrix of Gb, and S is the similarity matrix between vertices, then the total similarity matrix between individual vertices can be calculated iteratively using the formula

    S <- (B S A^T + B^T S A) / ||B S A^T + B^T S A||.

Here A^T stands for the transpose of A, and S is the initial similarity matrix. The size of S is n x m, where m is the number of concepts in the first ontology and n is the number of concepts in the second. The secured mapping method generates adjacency matrices based on the hierarchical relationships between the concepts of the encrypted ontologies as per the following algorithm.
Algorithm 1. Generating the adjacency matrix for a given encrypted ontology
Let O be the given ontology and A the adjacency matrix. If n is the number of concepts in ontology O then A has order n x n.
1. Initialize A[i][j] = 0 for all i and j between 1 and n.
2. For i = 1 to n
   Begin
     Str = get the i-th concept of O
     Collection = get all superclasses of Str
     For each object x in Collection
     Begin
       For j = 1 to n
         If the j-th concept of O matches x then A[i][j] = 1
     End
   End
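Algorithm 1 can be sketched in Python as follows (a minimal sketch: the `superclasses` mapping is a hypothetical stand-in for whatever ontology API supplies the subclass/superclass links, and the concept labels stand for encrypted strings):

```python
# Sketch of Algorithm 1: build the adjacency matrix of an (encrypted)
# ontology from its subclass -> superclass links. Only the hierarchy is
# used; concept labels may be opaque encrypted strings.

def adjacency_from_hierarchy(concepts, superclasses):
    n = len(concepts)
    index = {c: i for i, c in enumerate(concepts)}
    A = [[0] * n for _ in range(n)]
    for i, c in enumerate(concepts):
        for sup in superclasses.get(c, []):
            j = index.get(sup)
            if j is not None:
                A[i][j] = 1  # concept i has concept j as a superclass
    return A

concepts = ["x9f2", "b71a", "c003"]               # e.g. encrypted concept names
superclasses = {"b71a": ["x9f2"], "c003": ["x9f2"]}  # hypothetical hierarchy
A = adjacency_from_hierarchy(concepts, superclasses)
```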
For the graphs Ga and Gb of Figure 1, for example, S(Ga1, Gb4) = s(Ga2, Gb1) + s(Ga2, Gb2).
S is the unity matrix initially.
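The similarity iteration of Section 4.1 can be transcribed directly (a pure-Python sketch with no external libraries; the two example graphs are hypothetical, and the iteration count is fixed rather than tested for convergence):

```python
# Sketch of the iteration S <- (B S A^T + B^T S A) / ||B S A^T + B^T S A||,
# with S initialized to the all-ones (unity) matrix.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def similarity(A, B, iterations=50):
    # S is n x m: rows index concepts of Gb, columns index concepts of Ga
    n, m = len(B), len(A)
    S = [[1.0] * m for _ in range(n)]
    for _ in range(iterations):
        M = [[x + y for x, y in zip(r1, r2)]
             for r1, r2 in zip(matmul(matmul(B, S), transpose(A)),
                               matmul(matmul(transpose(B), S), A))]
        norm = sum(v * v for row in M for v in row) ** 0.5 or 1.0
        S = [[v / norm for v in row] for row in M]
    return S

# hypothetical 2-vertex graphs, each with the single edge 1 -> 2
A = [[0, 1], [0, 0]]
B = [[0, 1], [0, 0]]
S = similarity(A, B)
```

For these two identical path graphs the iteration settles on matching vertex 1 with vertex 1 and vertex 2 with vertex 2, with zero cross-similarity.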
4.2. DegreeDifference Similarity (DDS) Matrix
The degree of a node is the number of edges connected to it. In the algorithm we first compute the SSNdegree (sum of self and neighbour degrees) of every node: the node's own degree plus the degrees of its neighbours. In Figure 1, for example, the SSNdegree of node 1 of Ga is 2+2+2 = 6. For a pair of nodes to be mapped, we take the difference between their SSNdegrees and subtract it from the maximum degree of the graph; we call the result the Degree Difference Similarity value. For the whole matching problem these values become part of the similarity matrix.
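The DDS computation can be sketched as follows (a minimal sketch on undirected adjacency lists; the text does not fully specify which graph's maximum degree is used, so here we assume the maximum degree over both graphs, and the example graphs are hypothetical):

```python
# Sketch of the DegreeDifference Similarity (DDS) values of Section 4.2.

def ssn_degree(adj, u):
    # degree of u plus the degrees of all its neighbours
    return len(adj[u]) + sum(len(adj[v]) for v in adj[u])

def dds_matrix(adj_a, adj_b):
    # assumption: maximum degree is taken over both graphs
    max_deg = max(len(nbrs) for adj in (adj_a, adj_b) for nbrs in adj.values())
    return {(u, v): max_deg - abs(ssn_degree(adj_a, u) - ssn_degree(adj_b, v))
            for u in adj_a for v in adj_b}

ga = {1: [2, 3], 2: [1, 3], 3: [1, 2]}  # hypothetical triangle: SSNdegree(1) = 2+2+2 = 6
gb = {1: [2], 2: [1, 3], 3: [2]}        # hypothetical path graph
dds = dds_matrix(ga, gb)
```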
4.3. NodeAttributeSimilarity (NAS) Matrix
This indicates how many neighbours of node 1 match neighbours of node 2. For each neighbour of node 1, we examine which neighbour of node 2 can be mapped to it based on having the same attribute name. If such a mapping is present, NAS (initialized to 0) is incremented by 1. We compute NAS for every node pair and express the values as a matrix for the whole matching problem.
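A sketch of the NAS score for one node pair (the attribute names are hypothetical; we assume each neighbour of node 2 may be mapped at most once):

```python
# Sketch of NodeAttributeSimilarity (NAS) from Section 4.3: count the
# neighbours of node1 for which some neighbour of node2 carries the
# same attribute name.

def nas(neigh1_attrs, neigh2_attrs):
    score = 0
    available = list(neigh2_attrs)
    for attr in neigh1_attrs:
        if attr in available:
            available.remove(attr)  # each neighbour of node2 is mapped once
            score += 1
    return score

# hypothetical attribute names of the neighbours of node1 and node2
score = nas(["name", "age"], ["age", "city"])
```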
4.4. EdgeAttributeSimilarity (EAS) Matrix
This is similar to NAS, but rather than comparing nodes we compare edges: for each edge of node 1 we try to map a corresponding edge of node 2 which has the same attribute. Computing EAS for every pair of nodes in the two graphs again yields a matrix.
4.5. Bayesian Belief Network
Here we describe an approach for ontology matching using a Bayesian belief network (BBN). Our approach does not use the Bayesian network to detect mappings directly. Instead, we use the network to learn the relationships between the different similarity measures, treated as different mapping methods, and then to choose the best mapping. The BBNs considered are assumed to contain one node per similarity measure and one output node representing the final output. This allows us not only to combine the methods (in a probabilistic framework) but also to reason about conditionally independent methods, a minimal required subset of methods, and the like. The input to the BBN training process for ontology mapping consists of positive and negative examples together with the results of the individual methods. The positive examples correspond to pairs for which a mapping has previously been established, while the negative ones are (all or a subset of) pairs that have been identified as non-matching. The CPTs and the structure are then learnt using the well-known K2 algorithm. When the trained BBN is used, the mapping justifications for unseen cases (pairs of concepts) are computed and inserted into the BBN as evidence, and the resulting alignment is calculated via propagation of this evidence.
4.6. K2 Algorithm [19]
The K2 algorithm finds the most probable Bayes network structure given a database. Let D be a database of cases, Z the set of variables represented by D, and B_Si, B_Sj two Bayes network structures containing exactly those variables that are in Z. By computing the ratio of posterior probabilities for pairs of network structures, we can rank-order a set of structures. Based on four assumptions, the paper introduces an efficient formula for computing P(B_S, D); let B_S represent an arbitrary Bayes network structure containing just the variables in D.
Assumption 1: The database variables, which we denote as Z, are discrete.
Assumption 2: Cases occur independently, given a Bayes network model.
Assumption 3: There are no cases that have variables with missing values.
Assumption 4: The density function f(B_P | B_S) is uniform, where B_P is a vector whose values denote the conditional-probability assignments associated with structure B_S.
Notation:
D - the dataset; it has m cases (records)
Z - a set of n discrete variables (x_1, ..., x_n)
r_i - a variable x_i in Z has r_i possible value assignments: v_i1, ..., v_iri
B_S - a Bayes network structure containing just the variables in Z
pi_i - each variable x_i in B_S has a set of parents, which we represent as a list of variables pi_i
q_i - the number of unique instantiations of pi_i
w_ij - the j-th unique instantiation of pi_i relative to D
N_ijk - the number of cases in D in which variable x_i has the value v_ik and pi_i is instantiated as w_ij
Three more assumptions decrease the computational complexity to polynomial time:
<1> There is an ordering on the nodes such that if x_i precedes x_j, then we do not allow structures in which there is an arc from x_j to x_i.
<2> There exists a sufficiently tight limit on the number of parents of any node.
<3> P(pi_i -> x_i) and P(pi_j -> x_j) are independent when i != j.
The algorithm uses the following functions, where the N_ijk are relative to pi_i being the parents of x_i and relative to a database D:

    g(i, pi_i) = Product over j = 1..q_i of [ (r_i - 1)! / (N_ij + r_i - 1)! * Product over k = 1..r_i of N_ijk! ]

and

    Pred(x_i) = {x_1, ..., x_{i-1}},

which returns the set of nodes that precede x_i in the node ordering.
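The scoring function g above can be sketched in Python; since the factorials overflow quickly, the standard trick is to work in log space via the gamma function (a minimal sketch, where the `counts` argument is a hypothetical representation: one entry per parent instantiation w_ij, giving the counts N_ij1, ..., N_ijri):

```python
import math

# Sketch of the K2 scoring function g(i, pi_i) from Section 4.6,
# computed in log space: (r_i - 1)! = Gamma(r_i), N_ijk! = Gamma(N_ijk + 1).

def log_g(counts, r_i):
    total = 0.0
    for nijk in counts:          # one list of counts per instantiation w_ij
        n_ij = sum(nijk)         # N_ij = sum over k of N_ijk
        total += (math.lgamma(r_i)            # log (r_i - 1)!
                  - math.lgamma(n_ij + r_i)   # log (N_ij + r_i - 1)!
                  + sum(math.lgamma(n + 1) for n in nijk))  # log N_ijk!
    return total

# one parent instantiation, a binary variable (r_i = 2) seen twice,
# once with each value: g = 1!/3! * 1! * 1! = 1/6
value = log_g([[1, 1]], 2)
```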
5. RESULTS
The evaluation of the proposed system was carried out on the OAEI systematic benchmark suite. Since we compare names only for equality and give importance to structure, we need not encrypt the ontologies for the study of the evaluation measures. The evaluation measures we considered are precision, recall and F-measure. Precision gives the ratio of correctly found correspondences over the total number of returned correspondences. If R is the reference alignment and A is the found alignment, then precision is
    Precision(A, R) = |R intersect A| / |A|.

The formulas used by the K2 algorithm (Section 4.6) are:

    P(B_Si | D) / P(B_Sj | D) = [P(B_Si, D) / P(D)] / [P(B_Sj, D) / P(D)] = P(B_Si, D) / P(B_Sj, D)

    P(B_S, D) = P(B_S) * Product over i = 1..n of Product over j = 1..q_i of [ (r_i - 1)! / (N_ij + r_i - 1)! * Product over k = 1..r_i of N_ijk! ]

    N_ij = Sum over k = 1..r_i of N_ijk

    max over B_S of [P(B_S, D)] = Product over i = 1..n of max over pi_i of [ P(pi_i -> x_i) * Product over j = 1..q_i of (r_i - 1)! / (N_ij + r_i - 1)! * Product over k = 1..r_i of N_ijk! ]
Recall is the ratio of correctly found correspondences to the total number of expected correspondences:

    Recall(A, R) = |R intersect A| / |R|.

The following formula is used for the F-measure:

    F_alpha(A, R) = (Precision * Recall) / ((1 - alpha) * Precision + alpha * Recall).

Here alpha is between 0 and 1. If alpha is 1 the F-measure is the same as precision; if it is 0 it is the same as recall. Usually it is taken as 0.5.
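The three measures can be computed directly from the alignments viewed as sets of correspondence pairs (a minimal sketch; the pair labels are hypothetical, and with alpha = 0.5 the F-measure reduces to the usual harmonic mean):

```python
# Sketch of the evaluation measures: R is the reference alignment,
# A the found alignment, both as sets of correspondence pairs.

def evaluate(R, A, alpha=0.5):
    correct = len(R & A)                     # correctly found correspondences
    precision = correct / len(A)
    recall = correct / len(R)
    # guard the degenerate case where nothing was found correctly
    f = 0.0 if correct == 0 else \
        (precision * recall) / ((1 - alpha) * precision + alpha * recall)
    return precision, recall, f

R = {("c1", "d1"), ("c2", "d2"), ("c3", "d3")}   # hypothetical pairs
A = {("c1", "d1"), ("c2", "d2"), ("c4", "d4")}
p, r, f = evaluate(R, A)
```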
Table 1 gives the datasets and the results of the experiments in terms of the evaluation measures stated above.
Table 1
Benchmark test no Precision Recall F-measure
1xx 1 0.9 0.95
2xx 1 0.89 0.94
3xx 0.9 0.87 0.88
6. CONCLUSIONS
Maintaining privacy in interoperation systems is becoming increasingly important. Ontology
matching is the primary means of resolving semantic heterogeneity. Ontology matching helps
establish semantic correspondence rules that are used for query rewriting and translation in
interoperation systems. For information systems that want maximum privacy, the privacy of their
ontologies must be maintained. Our system gives a method to map ontologies which are secured.
REFERENCES
[1] C. Clifton, A. Doan, M. Kantarcioglu, G. Schadow, J. Vaidya, A. Elmagarmid, D. Suciu, "Privacy preserving data integration and sharing", In Proc. DMKD'2004.
[2] R. Agrawal, R. Srikant, "Privacy preserving data mining", In Proc. SIGMOD'2000.
[3] P. Mitra, P. Liu, C.C. Pan, "Privacy preserving ontology matching", In Proc. AAAI Workshop, 2005.
[4] J. Li, "LOM: Lexicon based ontology mapping tool", In Proc. PerMIS'2004.
[5] N.F. Noy, M.A. Musen, "Anchor-PROMPT: Using non-local context for semantic matching", In Proc. IJCAI-2001.
[6] N.F. Noy, M.A. Musen, "The PROMPT Suite: Interactive tools for ontology mapping and merging", International Journal of Human Computer Studies, 59(6), 2003.
[7] S. Melnik, H. Garcia-Molina, E. Rahm, "Similarity flooding: a versatile graph matching algorithm and its application to schema matching", In Proc. ICDE'2002.
[8] A. Doan, J. Madhavan, P. Domingos, A. Halevy, "Learning to map between ontologies on the semantic web", In Proc. WWW'2002.
[9] S. Melnik, H. Garcia-Molina, E. Rahm, "Similarity flooding: a versatile graph matching algorithm", In Proc. ICDE'2007.
[10] N. Choi, I.Y. Song, H. Han, "A survey on ontology mapping", SIGMOD Record, 2006.
[11] P. Shvaiko, J. Euzenat, "Ten challenges for ontology matching", In Proc. ICODAS'2008.
[12] Y. Kalfoglou, M. Schorlemmer, "Ontology mapping: the state of the art", The Knowledge Engineering Review, 18(1), 2003.
[13] M. Ehrig, S. Staab, "QOM: Quick ontology mapping", GI Jahrestagung (1), 2004.
[14] E. Rahm, P. Bernstein, "A survey of approaches to automatic schema matching", The VLDB Journal, 10(4), 2001.
[15] P. Shvaiko, J. Euzenat, "A survey of schema-based matching approaches", Journal on Data Semantics, LNCS 3720 (JoDS IV), 2005.
[16] J.M. Kleinberg, "Authoritative sources in a hyperlinked environment", Journal of the ACM, 1999.
[17] V. Blondel, P. Van Dooren, et al., "A measure of similarity between graph vertices: application to synonym extraction and web searching", SIAM Review, 2004.
[18] L. Ninove, et al., "Graph similarity algorithms", seminar presented at the Department of Mathematical Engineering, Universite Catholique de Louvain, 2007.
[19] G.F. Cooper, E.H. Herskovits, "The induction of probabilistic networks from data", Machine Learning, 1992.
Authors
Manjula Shenoy K. is currently working as an Assistant Professor (Selection Grade) in the CSE Department, Manipal Institute of Technology, Manipal University, Manipal.
Dr. K.C. Shet is a Professor in the Department of Computer Engineering, National Institute of Technology, Surathkal. He has several journal and conference papers to his credit.
Dr. U. Dinesh Acharya is Professor and Head of the Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal University, Manipal. He has several journal and conference papers to his credit.