Torky I. Sultan, Mona M. Nasr, Ayman E. Khedr, Walaa S. Ismail / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 1, January-February 2013, pp. 766-773

Semantic Conflicts Reconciliation (SCR): A Framework for Detecting and Reconciling Data-Level Semantic Conflicts

Torky I. Sultan, Mona M. Nasr, Ayman E. Khedr, Walaa S. Ismail
Faculty of Computers and Information, Information Systems Department, Helwan University

Abstract
Integrating heterogeneous data sources is not an easy task; it involves reconciling conflicts at several levels. Before heterogeneous data can be integrated, these heterogeneity conflicts must be resolved. Semantic conflicts, if undetected, can lead to disastrous results in even the simplest information system. In this paper, we propose a system architecture that resolves data-level semantic conflicts arising from different representations or interpretations of data contexts among sources and receivers. In the proposed ontology-based approach, all data semantics are explicitly described in the knowledge representation phase and automatically taken into account by the Interpretation Mediation Service phase, so conflicts can be detected and resolved automatically at query runtime. Data sources remain independent of the integration process, which means we can retrieve up-to-date data and smoothly update the data in each source without affecting the framework.

Keywords: Data Integration, Heterogeneous Databases, Interoperability, Ontology, Semantic Heterogeneity.

I. Introduction
Information integration and retrieval are very different today: the growth of the Internet and web-based resources, and the presence of structured, semi-structured, and unstructured data, have all added new dimensions to the problem of information retrieval and integration as it was known earlier. Information integration and the related issue of semantic interoperability are becoming an even greater concern. Information integration is considered one of the most important enablers of large-scale business applications such as enterprise-wide decision support systems. Its main goal is to combine selected systems into a unified new whole and give users the ability to interact with one another through this unified view. It requires a framework for storing metadata, and tools that make it easy to manage the semantic heterogeneity between sources and receivers.

There are different views about the classification of semantic conflicts, as in [1]. We can classify semantic conflicts into data-level conflicts and schema-level conflicts:
• Data-level conflicts: conflicts that arise at the data (instance) level, related to different representations or interpretations of data values among different sources.
• Schema-level conflicts: conflicts that arise at the schema level when schemas use different alternatives or definitions to describe the same information.

As expected, no single organization has complete information; intelligence information is usually gathered from different organizations in different countries, so integration is necessary to perform various intelligence analyses and to support decision-making. Different challenges appear when different agencies organize and report information using different interpretations. To illustrate the challenges of integrating information from different sources, consider a simple integration scenario involving the following data elements [2]. Suppose we have 150 agencies, each of which may use a different context. The varieties of these contexts are summarized in Table (1). These varieties yield 1,536 (i.e., 4*3*2*4*4*4) possible combinations. We use the term context to refer to these different ways of representing and interpreting data, and we can consider that each of the 150 data sources uses a different context as its data representation convention.

Table 1: Semantic Differences in Data Sources [2].

An analyst from any of the 150 agencies may need information from all the other agencies for intelligence analysis purposes. As shown in Table (1), when information from other agencies is not converted into the analyst's context, it is difficult to identify important patterns. Therefore, a total of 22,350 (i.e., 150*149) conversion programs would be required to convert data from any source's context to any other source's context, and vice versa.
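The counts in this scenario can be checked directly; the following sketch simply reproduces the arithmetic behind Table (1) and the pairwise-conversion estimate (the per-element variant counts are taken from the text):

```python
# Scenario from Table (1): six data elements, each with the stated number
# of context variants, shared among 150 agencies.
from math import prod

variants = [4, 3, 2, 4, 4, 4]      # context choices per data element
combinations = prod(variants)       # distinct contexts possible
agencies = 150
# one hand-coded conversion program per ordered (source, receiver) pair
conversions = agencies * (agencies - 1)

print(combinations)   # 1536
print(conversions)    # 22350
```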
Implementing tens of thousands of data conversions is not an easy task; maintaining them to cope with changes in data sources and receiver requirements over time is even more challenging [2].

II. Related Work
Achieving semantic interoperability among heterogeneous and disparate information sources has been a critical issue within the database community for the past two decades [1]. Technologies already exist to overcome heterogeneity in the hardware, software, and syntax used in different systems (e.g., the ODBC standard, XML-based standards, web services, and SOA - Service-Oriented Architectures). While these capabilities are essential to information integration, they do not address the heterogeneous data semantics that exist both within and across enterprises [2]. We can manage such heterogeneities either by adopting standards that would eliminate semantic heterogeneity in the sources altogether, or by developing and maintaining all the conversions necessary to reconcile semantic differences. Semantic interoperability can thus be achieved in a number of ways. We discuss the traditional approaches to semantic interoperability and then the most important ontology-based systems for data integration.

III. Traditional Approaches to Semantic Interoperability

Brute-force Data Conversions (BF)
In the Brute-force Data Conversions (BF) approach, all necessary conversions are implemented with hand-coded programs. For example, if we have N data sources and receivers, N(N-1) conversions need to be implemented to convert each source's context into each receiver's context. These conversions become costly to implement and very difficult to maintain when N is large. This is a labor-intensive process; nearly 70% of integration costs come from the implementation of these data conversion programs. A possible variation of the BF approach is to group sources that share the same set of semantic assumptions into one context. This variation allows multiple sources in the same context to share the same conversion programs, so the number of conversion programs is reduced. We refer to the original approach and this variation as BFS and BFC, respectively [2]. These approaches are illustrated schematically in Fig. 1.

Fig. 1: Traditional approaches to Semantic Interoperability [4].

Global Data Standardization (GS)
If we could develop and maintain a single data standard that defines a set of concepts and specifies the corresponding representation, all semantic differences would disappear and there would be no need for data conversion. Unfortunately, such standardization is usually infeasible in practice, for several reasons. There are legitimate needs for having different definitions for concepts and for storing and reporting data in different formats. Most integration and information-exchange efforts involve many existing systems, and agreeing on a standard often means someone has to change his or her current implementation, which creates obstacles and makes standard development and enforcement extremely difficult [3].

Interchange Data Standardization (IS)
Data exchange systems can sometimes agree on the data to be exchanged, i.e., standardize a set of concepts as well as their interchange formats. The underlying systems do not need to store the data according to the standard; it suffices that each data sender generates the data according to the standard. That is, this approach requires each system to have conversions between its local data and an interchange standard used for exchanging data with other systems. Thus, each system still maintains its own autonomy. This differs from global data standardization, where all systems must store data according to a global standard. With N systems exchanging information, the Interchange Standardization approach requires 2N conversions. The IS approach is a significant improvement over the brute-force approach, which might need to implement conversions between every pair of systems [4]. Although this approach has certain advantages, it also has several serious limitations [2]. First, all parties must reach an agreement on data definitions and formats, and reaching such an agreement can be a costly and time-consuming process. Besides, any change to the interchange standard affects all systems and the existing conversion programs. Lastly, the approach can involve many unnecessary data conversions.
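The scaling difference among these approaches can be sketched numerically; the three functions below are direct transcriptions of the formulas just discussed (k, the number of grouped contexts in BFC, is a free parameter):

```python
# Number of conversion programs required under each approach.
def bf(n):   return n * (n - 1)   # brute force: one per ordered pair of systems
def bfc(k):  return k * (k - 1)   # brute force over k grouped contexts
def is_(n):  return 2 * n         # interchange standard: to/from standard per system

for n in (10, 50, 150):
    print(n, bf(n), is_(n))   # IS grows linearly, BF quadratically
```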
IV. Ontology-Based Data Integration Approaches
Most of the shortcomings of the traditional approaches above can be overcome by declaratively describing data semantics explicitly and separating knowledge representation from the conversion programs (the implementation). We explain the most important ontology-based systems for data integration, which are SIMS, OBSERVER, KRAFT, SCROL, and COIN, with respect to their role and use of ontologies.

SIMS is based on a wrapper/mediator architecture; that is, each information source is accessed through a wrapper [5]. The SIMS mediator component unifies the various available information sources and provides the terminology for accessing them. The core capability of the mediator is the ability to intelligently retrieve and process data [6]. Each information source is incorporated into SIMS by describing the data it provides in terms of the domain model, which is contained in the mediator. SIMS uses a global domain model that can also be called a global ontology; the work presented in [6] classifies SIMS as a single-ontology approach. An independent model of each information source must be described for this system, along with a domain model defined to describe objects and actions. Further, the authors note scalability and maintenance problems when a new information source is added or the domain knowledge changes [7]. There is no concrete methodology for building ontologies in SIMS [6].

OBSERVER uses the concept of a data repository, which can be seen as a set of entity types and attributes. Each repository has a specific data organization and may or may not have a data manager [8]. The different data sources of a repository might be distributed. The architecture is based on wrappers, ontology servers, and an Interontology Relationships Manager (IRM) [9]. Here, a wrapper is a module that understands a specific data organization and knows how to retrieve data from the underlying repository while hiding that organization. In the IRM, synonym relationships relating the terms in the various ontologies are represented declaratively in an independent repository, which enables a solution to the vocabulary problem. OBSERVER is classified as a multiple-ontology approach: it defines a model for dealing with multiple ontologies that avoids the problems of integrating global ontologies [6]. The different (user) ontologies can be described using different vocabularies depending on the user's needs.

KRAFT was created assuming dynamic information sources [5]. The KRAFT system follows the hybrid ontology approach [6]. To overcome the problems of semantic heterogeneity, KRAFT defines two kinds of ontologies: local ontologies and a shared ontology. For each knowledge source there is a local ontology; the shared ontology formally defines the terminology of the problem domain in order to avoid the ontology mismatches that might occur between a local ontology and the shared ontology [10]. Any modification or addition in any source requires changes to the local ontology representing that source, and the mappings between the local and shared ontologies have to be performed again [8].

SCROL is a global schema approach that uses an ontology to explicitly categorize and represent predetermined types of semantic heterogeneity [1]. It is based on the use of a common ontology that specifies a vocabulary to describe and interpret shared information among its users. It is similar to the federated schema approach; however, an ontology-based domain model captures much richer semantics and covers a much broader range of knowledge within a target domain. SCROL assumes that the underlying information sources are structured data residing in structurally organized text files or database systems. However, the unprecedented growth of Internet technologies has made vast amounts of resources instantly accessible to various users via the World Wide Web (WWW) [1].

The COIN project was initiated in 1991 with the goal of achieving semantic interoperability among heterogeneous information sources. The main elements of its architecture are wrappers, context axioms, elevation axioms, a domain model, context mediators, an optimizer, and an executioner. A domain model in COIN is a collection of primitive types and semantic types (similar to types in the object-oriented paradigm) that defines the application domain corresponding to the data sources to be integrated. COIN introduces a new way of describing things in the world: it states that the truth of a statement can only be understood with reference to a given context, and the context information can be obtained by examining the data environment of each data source [11].

We have discussed different approaches to semantic information integration: the traditional and the ontology-based approaches. The problem of semantic interoperability is not new, and people have tried to achieve it in the past using various approaches. Traditional approaches have sometimes been reasonably successful in limited applications, but have proven either very costly to use, hard to scale to larger applications, or both.
Traditional approaches have drawbacks that make them inappropriate for integrating information from a large number of data sources. Existing ontology-based approaches to semantic interoperability have also not been sufficiently effective, because there is no systematic methodology to follow, no concrete methodology for building the ontologies, and all existing ontology-based approaches use a static data model in reconciling semantic conflicts.

V. System Architecture
The Semantic Conflicts Reconciliation (SCR) framework is an ontology-based system that aims to solve data-level semantic conflicts among different sources and receivers with a systematic methodology. SCR relies on a domain-specific ontology to create user queries: the user browses the merged ontology and selects specific terms and conditions to create a global query. The user's query terms are mapped to the corresponding terms in each data source to decompose the global query into a set of naive sub-queries. The decomposed sub-queries are converted into well-formed sub-queries before being sent to the appropriate databases. Finally, SCR combines the well-formed query results and returns them to the users in their own contexts.

SCR consists of two phases: the knowledge representation phase and the interpretation mediation service phase.

5.1 Knowledge Representation
The knowledge representation phase consists of the following components:
• Ontology Extraction: extract a local ontology from each database.
• Global/Merged Ontology: merge all local ontologies to construct a global one that contains all major concepts and the relationships between them.
• Contexts: explicitly describe the sources' and receivers' assumptions about data.
• Mapping: link the constructed merged ontology to the corresponding terms in the data sources to produce the semantic catalog.

In the knowledge representation phase we take multiple database sources as input and produce the global ontology and the semantic catalog as output. This process is done once, when the integration process is started; any later changes in the schemas can easily be added to the global ontology and the semantic catalog. Fig. (2) describes the knowledge representation phase. It assumes that there are two or more heterogeneous sources in the same domain that are semantically related. It takes the databases as input and extracts a local ontology for each source. A reasoning system then determines the corresponding terms based on a suitable matching algorithm, and the corresponding terms in the local ontologies are merged into one new global ontology. Once we have a constructed merged ontology for all sources, we can add annotations that explicitly describe the semantics of each data source. Finally, the mapping system maps the merged ontology to the corresponding data sources to facilitate retrieving up-to-date data from the integrated databases.

Fig. 2: Knowledge Representation phase.

5.1.1 Database to Ontology Extraction
In the ontology extraction step, we have multiple databases and extract a local ontology from each one. A local ontology contains all the database information, such as tables, columns, relations, and constraints. Moreover, it contains an intensional definition that represents a higher-level abstraction than the relational schema.

Fig. 3: Database to local ontology extraction.

The local ontology represents a relational database's tables as concepts and its columns as slots of those concepts. The local ontologies are represented in a formal standard language called OWL (the Web Ontology Language), the most popular ontology representation language [12]. OWL is designed to be used by applications that need to process the contents and meaning of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary for describing properties and classes, along with a formal semantics. Creating a local ontology for each database keeps the databases independent: any changes in a schema or its relations can easily be added to its local ontology. The local ontology includes only the metadata and the additional semantics; the database instances remain in the data source, separate from its ontology.
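The table-to-concept, column-to-slot extraction described above can be sketched in a few lines. This is an illustrative sketch only, not the actual extraction tool used by the framework: it reads a relational schema (here a throwaway in-memory SQLite table with invented names) and builds a dictionary standing in for the local ontology.

```python
# Sketch: map every table in a relational schema to an ontology concept
# whose columns become slots. The Employee table is an invented example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (emp_id INTEGER PRIMARY KEY, "
             "name TEXT, salary REAL)")

def extract_local_ontology(conn):
    """One concept per table; the table's columns become its slots."""
    ontology = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")
    for (table,) in tables.fetchall():
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        ontology[table] = {"slots": [c[1] for c in cols]}  # c[1] = column name
    return ontology

print(extract_local_ontology(conn))
# {'Employee': {'slots': ['emp_id', 'name', 'salary']}}
```

A real extraction (e.g., via DataMaster) would additionally carry the constraints and relations into OWL classes and properties; the dictionary here only illustrates the structural mapping.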
We can use DataMaster, a Protégé plug-in, to import a schema from a relational database into ontologies [13]. DataMaster uses JDBC/ODBC to connect to the database (the user chooses one of them) and allows several ways of translating the database schema into an ontology, depending on the user's application requirements [14].

5.1.2 Global Ontology Construction
The merging process aims to create one global (merged) ontology containing the contents of multiple local ontologies. Ontology merging is the creation of a new ontology from two or more ontologies; the result contains all the knowledge of the initial ontologies. To create a merged ontology, the corresponding objects are matched across two or more local ontologies, so a suitable matching algorithm must be chosen. Matching is the core of the merging process: it creates one vantage point from multiple ontologies, where some concepts and slots are represented as a new concept with new slots, or some slots are merged and moved under another concept. In other words, a new structure is created in the merged ontology. This structure does not affect the information sources, because each local ontology is independent. Creating a standard formal model (the merged ontology) makes querying multiple databases satisfy the user's requirements at the semantic level.

The proposed framework uses a string-matching algorithm. String similarity matching calculates the string distance to determine the matching of entity names. Before comparing strings, some linguistic processing must be performed; this processing transforms each term into a standard form so it can be recognized easily. Based on the matching results, the system presents suggestions to the expert about merging certain concepts. The expert may accept some or all of these suggestions. Then some concepts and their slots may be merged or copied as they are, or some overlapping slots may be merged under a new concept. Hence, a new structure is created from multiple databases based on the semantics of terms and relations.

The SCR framework uses the PROMPT tool for matching and merging local ontologies. PROMPT is a semi-automatic Protégé plug-in that guides the expert by providing suggestions about merging and copying classes. Fig. (4) explains the PROMPT algorithm [15]: PROMPT takes two ontologies as input and guides the user in creating a merged ontology as output, generating a list of suggestions based on the chosen matching algorithm. The framework uses PROMPT's lexical matching algorithm.

Fig. 4: The flow of the PROMPT algorithm.

The PROMPT suggestions and conflicts are shown in a new interface, where the expert can modify the suggestions. This semi-automatic tool adds the expert's perception to the merging process: the matching techniques serve as helpers, but the human view makes the matching deeper and increases conflict detection. Moreover, SCR aims to produce one unified view from multiple ontologies that the different systems can agree upon; it can then query the merged ontology and satisfy the user's requirements. Matching and merging classes and slots also make query processing faster, because we query one term instead of many.

5.1.3 Explicitly Define Contexts
Contexts explicitly describe the assumptions about data sources and receivers in a formal way. We consider a context to be a collection of value assignments or value specifications for those terms in the global ontology whose semantics we need to describe. Once we have described the contexts for each source, the SCR framework can automatically detect and reconcile the data-level semantic conflicts among the different data sources and receivers using the interpretation mediation service, which handles these conflicts by analyzing and comparing the differences in contexts among sources and receivers. SCR assigns context descriptions to sources in the following two steps.

Annotation: add annotation properties to the global ontology slots to denote their contexts. We consider annotation properties to be special properties that affect the interpretation of data values.

Value assignment: assign a value to each annotation property created in the previous step. We can associate more than one annotation property with the same slot (data type property) in the merged ontology, and we can easily add, remove, and change the assigned values in the ontology whenever a source's context changes over time.
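The two steps above, annotation and value assignment, can be sketched with plain data structures. The slot, source, and annotation-property names below (`salary`, `currency`, `scaleFactor`, `sourceA`, `sourceB`) are invented for illustration; in the framework they would live as OWL annotation properties on the merged ontology.

```python
# Sketch: each slot in the merged ontology carries annotation properties
# (the "annotation" step), and each source assigns its own values to them
# (the "value assignment" step). All names here are hypothetical.
contexts = {
    "salary": {                      # a slot (data type property)
        "sourceA": {"currency": "USD", "scaleFactor": 1},
        "sourceB": {"currency": "EUR", "scaleFactor": 1000},
    }
}

def context_of(slot, source):
    """Look up the annotation property values a source assigned to a slot."""
    return contexts[slot][source]

print(context_of("salary", "sourceB"))
# {'currency': 'EUR', 'scaleFactor': 1000}
```

Updating a source's context over time then amounts to reassigning values in this structure, without touching the mediation logic.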
5.1.4 Mapping the Global Ontology to Multiple Databases
The main purpose of the proposed mapping tool is to find and match semantically similar terms in the global query with the corresponding terms in the data sources of the integrated system. The output of the mapping process is the semantic catalog.

We created the merged ontology, a standard model representing the different database systems. However, there is no link yet between the merged ontology and the databases, and the merged ontology does not contain the database instances. Thus, we need to link the merged ontology to the integrated databases in order to retrieve up-to-date data from multiple sources: each term in the merged ontology must be linked to the corresponding terms in the integrated databases. The output of the mapping process is the Semantic Catalog of the integrated databases, which contains mapping data and metadata collected automatically during the mapping process. We developed a semi-automatic mapping tool to map a merged ontology to multiple databases, shown in Fig. (5).

Fig. 5: The proposed mapping tool.

The mapping process in our tool follows these steps:
• The process starts by creating a database with two tables: the first saves the mapping data, and the second saves the metadata of the database systems. This is done once, when the mapping process is started.
• After creating this database, the expert selects the first database to link its schema (intensional relation) with the terms in the global ontology. When the user selects a database from the list of all existing databases, all tables in the selected database are listed; when the user then selects a table's columns, all columns in the selected table are listed and saved in the first table, along with the corresponding terms in the global ontology. All the primary keys, foreign keys, and referenced tables for each table in the selected database are automatically retrieved and saved in the second table as metadata, for use in query processing.
• At the end of the mapping process, the Semantic Catalog has been created. It contains the semantic mapping data among the multiple databases together with their metadata. Hence, multiple database systems can be queried through the merged ontology using the Semantic Catalog, retrieving up-to-date data; any changes in the integrated sources are easily reflected in the semantic catalog.

5.2 Interpretation Mediation Service
The knowledge representation phase alone is not enough to solve semantic conflicts among data sources and receivers. The second phase in our system is the interpretation mediation service, in which the user interacts with the system through a graphical user interface (GUI). The GUI displays the global ontology terms so the user can find the global query terms easily and quickly: the user browses the global ontology and selects specific terms for his query. The user submits the query without taking into account that sources and receivers may interpret data contexts differently. To retrieve correct results from the different integrated data sources, queries should be rewritten into mediated queries in which all semantic conflicts between sources and receivers are automatically detected and solved, but this is a big challenge for users. We cannot assume that users have intimate knowledge of the data sources being queried, especially when the number of sources is large. The user should remain isolated from semantic conflict problems and should not need to learn an ontology query language. The SCR framework helps the user query the integrated sources with minimal effort using a Query-by-Example (QBE) language; as a result, he needs to know only a little about the global ontology terms.

The Interpretation Mediation Service, shown in Fig. (6), consists of the following three main components:
• Query preprocessor
• SQL generator
• Converter (query engine)
Fig. 6: Architecture of the SCR system.

5.2.1 Query Preprocessor
The query preprocessor accepts a naive user query and the semantic catalog as input and produces blocks of the user's data based on the items selected in the global query. Each block represents a query, but without any language format. Once the user selects terms and conditions, the query preprocessor performs the following actions. First, it accesses the semantic catalog (the mapping file) to retrieve the database names, table names, and column names mapped to the selected terms and conditions in the user query. Second, it reorganizes the retrieved data into blocks according to the database name.

5.2.2 SQL Generator
The SQL generator turns the query blocks received from the query preprocessor into SQL queries and directs them to the converter. It uses the semantic catalog (metadata) to translate the blocks into correct SQL syntax, adding the SELECT, FROM, and WHERE clauses. In addition, if the query needs to retrieve instances from more than one table, the primary keys, foreign keys, and referenced tables from the integrated databases may be added from the semantic catalog metadata file as well.
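The block-to-SQL translation can be sketched as follows. The block layout (`database`, `table`, `columns`, `conditions`) is an assumption for illustration, not the paper's exact format, and the example covers only the single-table case without the join-key metadata.

```python
# Sketch: assemble a well-formed single-table SQL statement from one
# preprocessed query block. Field names and the example data are invented.
def generate_sql(block):
    cols = ", ".join(block["columns"])
    sql = f"SELECT {cols} FROM {block['table']}"
    if block.get("conditions"):
        sql += " WHERE " + " AND ".join(block["conditions"])
    return sql

block = {"database": "HR_DB", "table": "staff",
         "columns": ["name", "monthly_pay"],
         "conditions": ["monthly_pay > 5000"]}
print(generate_sql(block))
# SELECT name, monthly_pay FROM staff WHERE monthly_pay > 5000
```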
5.2.3 Converter
We consider the converter to be the query engine: it takes the SQL query from the SQL generator and the user context as input. The converter transforms the user's naive query (which ignores differences in assumptions between sources) into a well-formed query that respects the differences among the sources' and receivers' contexts. The converter reconciles semantic conflicts at query time before sending the query to the appropriate database, and then returns the correct result to the user in his context. Fig. (7) describes the general form of the conversion functions.

Fig. 7: General form of the conversion functions.

The conversion functions represent the conversions among all the annotation property values, or contexts, described in the merged ontology. In SCR there is no relation between the number of sources or receivers and the number of conversion functions; the number depends on the contexts (the annotation property values) of each source and on whether they match the user's query context.

Fig. 8: Pseudo code describing the basic specifications of the Interpretation Mediation Service algorithm.

Fig. (8) describes the basic specifications of the Interpretation Mediation Service algorithm. The converter receives well-formed queries from the SQL generator and, before sending each to the appropriate database, connects to the merged ontology and compares each attribute value context in the submitted query with the corresponding annotation property value in the ontology. If the contexts mismatch, the converter calls the external conversion functions to reconcile them before sending the query to the database. The converter then receives and recombines the sub-query results from the integrated databases and finally returns the query results to the user in his context.
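One conversion function of the kind described above can be sketched for the common scale-factor/currency case. The contexts and the exchange rate below are invented examples, not values from the paper, and a real deployment would look rates up rather than hard-code them.

```python
# Sketch of a conversion function: compare the source's annotation values
# with the receiver's context and convert the value when they differ.
RATES = {("EUR", "USD"): 2}   # assumed, purely illustrative rate

def convert(value, src_ctx, rcv_ctx):
    """Reconcile scale-factor and currency conflicts for one value."""
    value = value * src_ctx["scaleFactor"] / rcv_ctx["scaleFactor"]
    if src_ctx["currency"] != rcv_ctx["currency"]:
        value *= RATES[(src_ctx["currency"], rcv_ctx["currency"])]
    return value

src = {"currency": "EUR", "scaleFactor": 1000}   # source reports thousands of EUR
rcv = {"currency": "USD", "scaleFactor": 1}      # receiver expects plain USD
print(convert(5, src, rcv))   # 10000.0
```

As the text notes, the number of such functions depends on how many distinct annotation values occur, not on the number of systems: one scale-factor function serves every source/receiver pair that disagrees on scale.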
VI. Conclusion
In this paper, we proposed a system architecture for detecting and reconciling data-level semantic conflicts. The ontology-based system separates the mediation services from the knowledge representation, which supports sharing and reusing the semantic encoding and keeps the contexts in the ontology independent. As a result, we can easily modify contexts or add new ones in the ontology if users change their assumptions over time, without affecting the mediator. With the SCR framework, a user in any context can ask queries over any number of data sources in other contexts as if they were all in the same context, without being burdened by the semantic conflicts in the data sources. The ontologies are represented in OWL, a formal standard language, which makes it possible for them to be exchanged and processed easily by other applications. SCR preserves the local autonomy of each data source, which can change and be maintained independently.

References
[1] Ram, S. and Park, J., Semantic Conflict Resolution Ontology (SCROL): An Ontology for Detecting and Resolving Data and Schema-Level Semantic Conflicts, IEEE Transactions on Knowledge and Data Engineering, 16(2), pp. 189-202, 2004.
[2] Madnick, S., Gannon, T., Zhu, H., Siegel, M., Moulton, A. and Sabbouh, M., Framework for the Analysis of the Adaptability, Extensibility, and Scalability of Semantic Information Integration and the Context Mediation Approach, MIT, 2009.
[3] Rosenthal, A., Seligman, L. and Renner, S., From Semantic Integration to Semantics Management: Case Studies and a Way Forward, ACM SIGMOD Record, 33(4), pp. 44-50, 2004.
[4] Zhu, H., Effective Information Integration and Reutilization: Solutions to Deficiency and Legal Uncertainty, PhD Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2005.
[5] Hajmoosaei, A. and Abdul Kareem, S., An ontology-based approach for resolving semantic schema conflicts in the extraction and integration of query-based information from heterogeneous web data sources, in Proc. Third Australasian Ontology Workshop (AOW 2007), Gold Coast, Australia, CRPIT, 85, Meyer, T. and Nayak, A. C., Eds., ACS, pp. 35-43.
[6] Wache, H., Vogele, T., Visser, U., Stuckenschmidt, H., Schuster, G., Neumann, H. and Hubner, S., Ontology-Based Integration of Information - A Survey of Existing Approaches, in Proceedings of the IJCAI-01 Workshop on Ontologies and Information Sharing, pp. 108-117, 2001.
[7] Arens, Y., Chee, Y. and Knoblock, C. A., SIMS: Integrating data from multiple information sources, Information Sciences Institute, University of Southern California, USA, 1992.
[8] Buccella, A., Cechich, A. and Brisaboa, N. R., Ontology-Based Data Integration Methods: A Framework for Comparison, Revista Colombiana de Computación, 6(1), 2008.
[9] Mena, E., Kashyap, V., Sheth, A. and Illarramendi, A., OBSERVER: An approach for query processing in global information systems based on interoperation across pre-existing ontologies, Distributed and Parallel Databases, 8(2), pp. 223-271, April 2000.
[10] Visser, P. R. S., Jones, D. M., Bench-Capon, T. J. M. and Shave, M. J. R., Assessing heterogeneity by classifying ontology mismatches, in Proceedings of the International Conference on Formal Ontology in Information Systems (FOIS'98), IOS Press, pp. 148-162, 1998.
[11] Zhu, H. and Madnick, S., Reconciliation of temporal semantic heterogeneity in evolving information systems, ESD-WP-2009-03, Massachusetts Institute of Technology, Cambridge, MA, USA, 2009.
[12] Devedzic, V., Model Driven Architecture and Ontology Development, Springer-Verlag Berlin Heidelberg, second edition, 2006.
[13] Nyulas, C., O'Connor, M. and Tu, S., DataMaster - a plug-in for importing schemas and data from relational databases into Protégé, 10th International Protégé Conference, 2007.
[14] http://protegewiki.stanford.edu/wiki/DataMaster, last visited December 2011.
[15] Noy, N. F. and Musen, M. A., PROMPT: Algorithm and tool for automated ontology merging and alignment, in Proceedings of the National Conference on Artificial Intelligence (AAAI), pp. 450-455, 2000.