A Novel Expert System for Reasoning Based on Semantic Cancer

Taghi Karimi
Department of Mathematics, Payame Noor University, Tehran, Iran

International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 2, March-April 2013, pp. 644-652

Abstract
This paper describes a programmatic framework for representing, manipulating and reasoning with geographic semantics. The framework enables visualizing knowledge discovery, automating tool selection for user-defined geographic problem solving, and evaluating semantic change in knowledge discovery environments. Methods, data, and human experts (our resources) are described using ontologies. An entity's ontology describes, where applicable: uses, inputs, outputs, and semantic changes. These ontological descriptions are manipulated by an expert system to select methods, data and human experts to solve a specific user-defined problem; that is, a semantic description of the problem is compared to the services that each entity can provide, to construct a graph of potential solutions. A minimal spanning tree representing the optimal (least-cost) solution is extracted from this graph and displayed in real time. The semantic change(s) that result from the interaction of data, methods and people contained within the resulting tree are determined via expressions of transformation semantics represented within the JESS expert system shell. The resulting description represents the formation history of each new information product (such as a map or overlay) and can be stored, indexed and searched as required. Examples are presented to show (1) the construction and visualization of information products, (2) the reasoning capabilities of the system to find alternative ways to produce information products from a set of data, methods and expertise, given certain constraints, and (3) the representation of the ensuing semantic changes by which an information product is synthesized.

I. Introduction
The importance of semantics in geographic information is well documented [1-4]. Semantics are a key component of interoperability between GIS; there are now robust technical solutions for interoperating geographic information in a syntactic and schematic sense (e.g. OGC, NSDI), but these fail to take account of any sense of meaning associated with the information. Previous studies describe how exchanging data between systems often fails due to confusion in the meaning of concepts. Such confusion, or semantic heterogeneity, significantly hinders collaboration if groups cannot agree on a common lexicon for core concepts. Semantic heterogeneity is also blamed for the inefficient exchange of geographic concepts and information between groups of people with differing ontologies [5-9].

Semantic issues pervade the creation, use and re-purposing of geographic information. In an information economy we can identify the roles of information producer and information consumer, and in some cases, in national mapping agencies for example, datasets are often constructed incrementally by different groups of people with an implicit (but not necessarily recorded) goal. The overall meaning of the resulting information products is not always obvious to those outside that group, existing for the most part in the creators' mental model.
When solving a problem, a user may gather geospatial information from a variety of sources without ever encountering an explicit statement about what the data mean, or what they are (and are not) useful for. Without capturing the semantics of the data throughout the process of creation, the data may be misunderstood, be used inappropriately, or not used at all when they could be. The consideration of geospatial semantics needs to cater explicitly for the particular way in which geospatial tasks are undertaken [10]. As a result, the underlying assumptions about the methods used with data, and the roles played by human expertise, need to be represented in some fashion so that a meaningful association can be made between appropriate methods, people and data to solve a problem. It is not the role of this paper to present a definitive taxonomy of geographic operations or their semantics; to do so would trivialize the difficulties of defining geographic semantics.

II. Background and Aims
This paper presents a programmatic framework for representing, manipulating and reasoning with geographic semantics. In general, semantics refers to the study of the relations between symbols and what they represent [11]. In the framework outlined in this paper, semantics play two valuable and specific roles: first, to determine the most appropriate resources (method, data or human expert) to use in concert to solve a geographic problem, and second, to act as a measure of change in meaning when data are operated on by methods and human experts. Both of these roles are discussed in detail in Section 3. The framework draws on a number of different research fields, specifically geographical semantics [11-13], ontologies, computational semantics, constraint-based reasoning, expert systems and visualization, to represent aspects of these resources.
The framework sets out to solve a multi-layered problem of visualizing knowledge discovery, automating tool selection for user-defined geographic problem solving, and evaluating semantic change in knowledge discovery environments. The end goal of the framework is to associate with geospatial information products the details of their formation history, and tools by which to browse, query and ultimately understand this formation history, thereby building a better understanding of the meaning and appropriate use of the information.

The problem of semantic heterogeneity arises due to the varying interpretations given to the terms used to describe facts and concepts. Semantic heterogeneity exists in two forms: cognitive and naming. Cognitive semantic heterogeneity results from the absence of a common base of definitions between two (or more) groups. As an example, think of these as groups of scientists attempting to collaborate. If the two groups cannot agree on definitions for their core concepts then collaboration between them will be problematic. Defining such points of agreement amounts to constructing a shared ontology, or at the very least, points of overlap [12-17]. Naming semantic heterogeneity occurs when the same name is used for different concepts or different names are used for the same concept. It is not possible to undertake any semantic analysis until problems of semantic heterogeneity are resolved. Ontologies, described below, are widely recommended as a means of rectifying semantic heterogeneity. The framework presented in this paper utilizes that work and other ontological research to solve the problem of semantic heterogeneity.

The use of an expert system for automated reasoning fits well with the logical semantics utilized within the framework. The Java Expert System Shell (JESS) is used to express diverse semantic aspects of methods, data, and human experts. JESS performs string comparisons of resource attributes (parsed from ontologies), using backward chaining to determine interconnections between resources. Backward chaining is a goal-driven problem-solving methodology, starting from the set of possible solutions and attempting to derive the problem. If the conditions for a rule to be satisfied are not found within that rule, the engine searches for other rules that have the unsatisfied condition as their conclusion, establishing dependencies between rules. JESS functions as a mediator system, with a foundation layer where the methods, data, and human experts are described (the domain ontology), a mediation layer with a view of the system (the task ontology), and a user interface layer (for receiving queries and displaying results).
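As a minimal sketch of this chaining behaviour in JESS (the rule, template and fact names below are illustrative only, not those used by the framework), an unsatisfied condition in one rule generates a need- fact that activates another rule able to supply it:

(do-backward-chaining vector_layer)

;; This rule needs a vector layer; if none is present, JESS asserts a
;; (need-vector_layer ...) fact on its behalf.
(defrule buffer-roads
  (vector_layer ?src)
  =>
  (assert (buffered ?src)))

;; This rule answers that need by mimicking a raster-to-vector conversion.
(defrule raster-to-vector
  (need-vector_layer $?)
  (raster_layer ?src)
  =>
  (assert (vector_layer ?src)))

Given a single (raster_layer dem) fact, running the engine fires raster-to-vector first, to satisfy the generated need, and then buffer-roads; this is the dependency-establishing behaviour described above.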
2.1 Ontology
In philosophy, ontology is the "study of the kinds of things that exist". In the artificial intelligence community, ontology has one of two meanings: a representation vocabulary, typically specialized to some domain or subject matter, or a body of knowledge describing some domain using such a representation vocabulary [18]. The goal of sharing knowledge can be accomplished by encoding domain knowledge using a standard vocabulary based on an ontology. The framework described here utilizes both definitions of ontology.

[Figure 1 – Interrelationships between different types of ontology.]

[Figure 2 – Information products are derived from the interaction of entities.]

The representation vocabulary embodies the conceptualizations that the terms in the vocabulary are intended to capture. Relationships described between conceptual elements in this ontology allow for the production of rules governing how these elements can be "connected" or "wired together" to solve a geographic problem.
In our case these elements are methods, data and human experts, each with their own ontology. In the case of datasets, a domain ontology describes salient properties such as location, scale, date, format, etc., as currently captured in metadata descriptions (see Figure 1). In the case of methods, a domain ontology describes the services a method provides in terms of a transformation from one semantic state to another. In the case of human experts, the simplest representation is again a domain ontology that shows the contribution a human can provide in terms of steering or configuring methods and data. However, it should also be possible to represent a two-way flow of knowledge as the human learns from situations and thereby expands the number of services they can provide (we leave this issue for future work). The synthesis of a specific information product is specified via a task ontology that must fuse together elements of the domain and application ontologies to attain its goal. An innovation of this framework is the dynamic construction of the solution network, analogous to the application ontology. In order for resources to be useful in solving a problem, their ontologies must also overlap. Ontology is a useful metaphor for describing the genesis of the information product. A body of knowledge described using the domain ontology is utilized in the initial phase of setting up the expert system. A task ontology is created at the conclusion of the automated process, specifically defining the concepts that are available. An information product is derived from the use of data extracted from databases and knowledge from human experts applied through methods, as shown in Figure 2.

By forming a higher-level ontology which describes the relationships between each of these resources it is possible to describe appropriate interactions. As a simple example (Figure 3), assume a user wishes to evaluate landuse/landcover change over a period of time. Classifying two Landsat TM images from 1990 and 2000 using reference data and expert knowledge, the user can compare the two resulting images to produce a map of the area(s) which have changed during that time. One interesting feature demonstrated in this example is the ability of human experts to gain experience through repeated exposure to similar situations. Even in this simple example a basic semantic structure is being constructed and a lineage of the data can be determined. Arguably an additional intermediate data product exists between the classifiers and the comparison; it has been removed for clarity.

[Figure 3 – A concrete example of the interaction of methods, data and human experts to produce an information product.]
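To make the example concrete, the resources shown in Figure 3 could be supplied to the expert system as facts along the following lines (a sketch only; the templates and slot names are illustrative, not the framework's actual schema):

;; Illustrative templates for describing resources to the rule engine.
(deftemplate dataset (slot name) (slot format) (slot theme) (slot date) (slot scale))
(deftemplate expert  (slot name) (slot role) (slot region))

;; The resources of the landuse/landcover change example expressed as facts.
(deffacts landcover-change-resources
  (dataset (name tm-1990)        (format raster) (theme landcover) (date 1990) (scale 100000))
  (dataset (name tm-2000)        (format raster) (theme landcover) (date 2000) (scale 100000))
  (dataset (name reference-data) (format vector) (theme landcover) (date 1990) (scale 100000))
  (expert  (name analyst-1)      (role image-interpretation) (region study-area)))

Rules standing in for the classifiers and the comparison can then match on these property slots, which is how the ontological descriptions are reduced to the attribute comparisons that JESS performs.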
2.2 Semantics
While the construction of the information product is important, a semantic layer sits above the operations and information (Figure 4). The geospatial knowledge obtained during the creation of the product is captured within this layer. The capture of this semantic information describes the transformations that the geospatial information undergoes, facilitating better understanding, providing a measure of repeatability of analysis, and improving communication, in the hope of promoting best practice in bringing geospatial information to bear. Egenhofer (2002) notes that the challenge remains of how best to make these semantics available to the user via a search interface. Pundt and Bishr (2002) outline a process in which a user searches for data to solve a problem. This search methodology is also applicable to the methods and human experts to be used with the data. This solution fails when multiple sources are available and nothing is known of their content, structure and semantics. The use of pre-defined ontologies aids users by reducing the available search space. Ontological concepts relevant to a problem domain are supplied to the user, allowing them to focus their query. A more advanced interface would take the user's query in their own terms and map that to an underlying domain ontology (Bishr, 1998).

As previously noted, the meaning of geospatial information is constructed, shaped and changed by the interaction of people and systems. Consequently, the interaction of human experts, methods and data needs to be carefully planned. A product created as a result of these interactions is dependent on the ontology of the data and methods and the epistemologies and ontologies of the human experts.
In light of this, the knowledge framework outlined below focuses on each of the resources involved (data, methods and human experts) and the roles they play in the evolution of a new information product. In addition, the user's goal that produced the product, and any constraints placed on the process, are recorded to capture aspects of intention and situation that also have an impact on meaning. This process and the impact of constraint-based searches are discussed in more detail in the following section.

III. Knowledge framework
The problem described in the introduction has been implemented as three components. The first, and the simplest, is the task of visualizing the network of interactions by which new information products are synthesized. The second, automating the construction of such a network for a user-defined task, is interdependent with the third, evaluating semantic change in a knowledge discovery environment, and both utilize functionality of the first. An examination of the abstract properties of data, methods and experts is followed by an explanation of these components and their inter-relationships.

3.1 Formal representation of components and changes
This section explains how the abstract properties of data, methods and experts are represented, and then employed to track semantic changes as information products are produced utilizing the tools described above. From the description in Section 2 it should be evident that such changes are a consequence of the arrangement of data, computational methods and expert interaction applied to data. At an abstract level above that of the data and methods used, we wish to represent some characteristics of these three sets of components in a formal sense, so that we can describe the effects deriving from their interaction. One strong caveat here is that our semantic description (described below) does not claim to capture all senses of meaning attached to data, methods or people; in fact, as a community of researchers we are still learning about which facets of semantics are important and how they might be described. It is not currently possible to represent all aspects of meaning and knowledge within a computer, so we aim instead to provide descriptions that are rich enough to allow users to infer aspects of meaning that are important for specific tasks from the visualizations or reports that we can synthesize. In this sense our own descriptions of semantics play the role of a signifier: the focus is on conveying meaning to the reader rather than explicitly carrying intrinsic meaning per se.

The formalization of semantics based on ontologies, operated on using a language capable of representing relations, provides for powerful semantic modelling (Kuhn, 2002). The framework, rules, and facts used in the Solution Synthesis Engine (see below) function in this way. Relationships are established between each of the entities by calculating their membership within a set of objects capable of synthesizing a solution. We extend the approach of Kuhn by allowing the user to narrow a search for a solution based on the specific semantic attributes of entities. Using the minimal spanning tree produced from the solution synthesis it is possible to retrace the steps of the process and so calculate semantic change. As each fact is asserted it contains information about the rule that created it (the method) and the data and human experts that were identified as the required resources. If we are able to describe the change to the data (in terms of abstract semantic properties) imbued by each of the processes through which it passes, then it is possible to represent the change between the start state and the finish state by differencing the two.
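Anticipating the property notation introduced in Section 4 below, one way to express this differencing is to compare the property set of the initial data state with that of the final state; the notation, including the change set \Delta, is a sketch of ours rather than a measure defined by the framework:

D_0\{p_1, p_2, \ldots, p_n\} \xrightarrow{\;M_1\;} D_1 \xrightarrow{\;M_2\;} \cdots \xrightarrow{\;M_k\;} D_k\{p'_1, p'_2, \ldots, p'_n\},

\Delta(D_0, D_k) = \{\, (p_i, p'_i) \mid p_i \neq p'_i \,\}.

Because each asserted fact records the rule (method), data and experts that produced it, the per-step changes can be accumulated along the minimal spanning tree to recover this end-to-end difference.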
Although the focus of our description is on semantics, there are good reasons for including syntactic and schematic information about data and methods also, since methods generally are designed to work in limited circumstances, using and producing very specific data types (pre-conditions and post-conditions). Hence, from a practical perspective it makes sense to represent and reason with these aspects in addition to semantics, since they will limit which methods can be connected together and dictate where additional conversion methods are required. Additional potentially useful properties arise when the computational and human infrastructure is distributed, e.g. around a network. By encoding such properties we can extend our reasoning capabilities to address problems that arise when resources must be moved from one node to another to solve a problem.

[Figure 4 – Interaction of the semantic layer and the operational layer.]
IV. Description of data
As mentioned in Section 2, datasets are described in general terms using a domain ontology drawn from generic metadata descriptions. Existing metadata descriptions hold a wealth of such practical information that can be readily associated with datasets; for example, the FGDC (1998) defines a mix of semantic, syntactic and schematic metadata properties. These include basic semantics (abstract and purpose), syntactic properties (data model information and projection), and schematic properties (creator, theme, temporal and spatial extents, uncertainty, quality and lineage). We explicitly represent and reason with a subset of these properties in the work described here and could easily expand to represent them all, or any other given metadata description that can be expressed symbolically. Formally, we represent the set of n properties of a dataset D as:

D\{p_1, p_2, \ldots, p_n\}   (Gahegan, 1996).

V. Describing Methods
While standards for metadata descriptions are already mature and suit our purposes, complementary mark-up languages for methods are still in their infancy. It is straightforward to represent the signature of a method in terms of the format of data entering and leaving the method, and knowing that a method requires data to be in a certain format will cause the system to search for and insert conversion methods automatically where they are required. So, for example, if a coverage must be converted from raster format to vector format before it can be used as input to a surface flow accumulation method, then the system can insert appropriate data conversion methods into the evolving query tree to connect to appropriate data resources that would otherwise not be compatible. Similarly, if an image classification method requires data at a nominal scale of 1:100,000 or a pixel size of 30 m, any data at finer scales might be generalized to meet this requirement prior to use. Although such descriptions have great practical benefit, they say nothing about the role the method plays or the transformation it imparts to the data; in short, they do not enable any kind of semantic assessment to be made.

A useful approach to representing what GIS methods do, in a conceptual sense, centers on a typology (e.g. Albrecht's 20 universal GIS operators, 1994). Here, we extend this idea to address a number of different abstract properties of a dataset, in terms of how the method invoked changes these properties (Pascoe & Penny, 1995; Gahegan, 1996). In a general sense, the transformation performed by a method (M) can be represented by pre-conditions and post-conditions, as is common practice with interface specification and design in software engineering. Using the notation above, our semantic description takes the form:

M : D\{p_1, p_2, \ldots, p_n\} \xrightarrow{\;Operation\;} D'\{p'_1, p'_2, \ldots, p'_n\},

where Operation is a generic description of the role or function the method provides, drawn from a typology. For example, a cartographic generalization method changes the scale at which a dataset is most applicable, a supervised classifier transforms an array of numbers into a set of categorical labels, and an extrapolation method might produce a map for next year based on maps of the past.
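As an illustration of this notation, the cartographic generalization example can be written as follows (the property values are chosen by us purely to show the form, not taken from the framework):

M_{generalize} : D\{format = raster,\; scale = 1{:}25\,000,\; theme = elevation\} \xrightarrow{\;Generalization\;} D'\{format = raster,\; scale = 1{:}100\,000,\; theme = elevation\},

so only the scale property changes, and differencing the two property sets isolates the semantic change imparted by the method.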
Clearly, there are any number of key dimensions over which such changes might be represented; the above examples highlight spatial scale, conceptual "level" (which at a basic syntactic level could be viewed simply as statistical scale) and temporal applicability, or simply time. Others come to light following just a cursory exploration of GIS functionality: change in spatial extents (e.g. windowing and buffering), and change in uncertainty (very difficult in practice to quantify, but easy to show in an abstract sense that there has been a change). Again, we have chosen not to restrict ourselves to a specific set of properties, but rather to remain flexible in representing those that are important to specific application areas or communities. We note that as Web Services become more established in the GIS arena, such an enhanced description of methods will be a vital component in identifying potentially useful functionality.

VI. Describing People
Operations may require additional configuration or expertise in order to carry out their task. People use their expertise to interact with data and methods in many ways, such as gathering, creating and interpreting data, configuring methods and interpreting results. These activities are typically structured around well-defined tasks where the desired outcome is known, although, as in the case of knowledge discovery, they may sometimes be more speculative in nature. In our work we have cast the various skills that experts possess in terms of their ability to help achieve some desired goal. This, in turn, can be re-expressed as their suitability to oversee the processing of some dataset by some method, either by configuring parameters, supplying judgment or even performing the task explicitly. For example, an image interpretation method may require identification of training examples, which in turn necessitates local field knowledge; such knowledge can also be specified as a context of applicability using the time, space, scale and theme parameters that are also used to describe datasets. As such, a given expert may be able to play a number of roles
that are required by the operations described above, with each role described as:

E : Operation\{p_1, p_2, \ldots, p_n\},

meaning that expert E can provide the necessary knowledge to perform Operation within the context of p_1, ..., p_n. So, to continue the example of image interpretation, p_1, ..., p_n might represent (say) floristic mapping of Western Australia, at a scale of 1:100,000, in the present day. At the less abstract schematic level, location parameters can also be used to express the need to move people to different locations in order to conduct an analysis, or to bring data and methods distributed throughout cyberspace to the physical location of a person.

Another possibility here, which we have not yet implemented, is to acknowledge that a person's ability to perform a task can increase as a result of experience. So it should be possible for a system to keep track of how much experience an expert has accrued by working in a specific context (described as p_1, ..., p_n). In this case the expert expression would also require an experience or suitability score, as described for constraint management in Section 3.3 below. We could then represent a feedback from the analysis exercise to the user, modifying their experience score.
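As a sketch of how such a role description might be exploited by the rule engine (the templates and slot names are again illustrative rather than the framework's actual schema), a classification step can be made to fire only when an expert whose context of applicability matches the dataset is available:

(do-backward-chaining classified_image)

;; Illustrative descriptions of a dataset and of an expert role, with the
;; role's context of applicability expressed through the same parameters.
(deftemplate dataset     (slot name) (slot format) (slot theme) (slot scale))
(deftemplate expert-role (slot expert) (slot operation) (slot theme) (slot scale))

;; Fires when some other rule needs a classified image, and requires a raster
;; dataset plus an expert whose theme and scale context matches that dataset.
(defrule supervised-classification
  (need-classified_image $?)
  (dataset (name ?src) (format raster) (theme ?t) (scale ?s))
  (expert-role (expert ?e) (operation image-interpretation) (theme ?t) (scale ?s))
  =>
  (assert (classified_image ?src ?e)))  ;; records which expert supplied the knowledge

An experience or suitability score could be carried as a further slot on expert-role and updated whenever such a rule fires, which is essentially the feedback mechanism suggested above.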
3.2 Visualization of knowledge discovery
The visualization of the knowledge discovery process utilizes a self-organising graph package (TouchGraph) written in Java. TouchGraph enables users to interactively construct an ontology utilizing concepts (visually represented as shapes) and relationships (represented as links between shapes). Each of the concepts and relationships can have associated descriptions that give more details for each of the entity types (data, methods, and people). A sample of the visualization environment is shown in Figure 5.

[Figure 5 – A sample of the visualization environment.]

TouchGraph supports serialization, allowing the development of the information product to be recorded and shared among collaborators. Serialization is the process of storing and converting an object into a form that can be readily reused or transported; for example, an ontology can be serialized and transported over the Internet. At the other end, deserialization reconstructs the object from the input stream. Information products described using this tool are stored as DAML+OIL objects so that the interrelationships between concepts can be described semantically. The DAML+OIL architecture was chosen because the goal of the DARPA Agent Markup Language (DAML) component is to capture term meanings, thereby providing a Web ontology language. The Ontology Interchange Language (OIL) contains formal semantics and efficient reasoning support, epistemologically rich modeling primitives, and a standard proposal for syntactical exchange notations (http://www.ontoknowledge.org/oil/).

3.3 Solution Synthesis Engine
The automated tool selection process, or solution synthesis, is more complex, relying on domain ontologies of the methods, data and human experts (resources) that are usable to solve a problem. The task of automated tool selection can be divided into a number of phases. First is the user's specification of the problem, either using a list of ontological keywords or in their own terms, which are then mapped to an underlying ontology. Second, ontologies of methods, data and human experts need to be processed to determine which resources overlap with the problem ontology. Third, a description of the user's problem and any associated constraints is parsed into an expert system to define rules that describe the problem. Finally, networks of resources that satisfy the rules need to be selected and displayed.

Defining a complete set of characteristic attributes for real-world entities (such as data, methods and human experts) is difficult due to the problem of selecting attributes that accurately describe the entity. Bishr's solution of using cognitive semantics to solve this problem, by referring to entities based on their function, is implemented in this framework. Methods utilize data or are utilized by human experts, and are subject to conditions regarding their use, such as data format, scale or a level of human knowledge. The rules describe the requirements of the methods ("if") and the output(s) of the methods ("then"). Data and human experts, specified by facts, are arguably more passive, and the rules of methods are applied to or by them respectively. A set of properties governing how rules may use them is defined for data (e.g. format, spatial and temporal extents) and human experts (e.g. roles and abilities) using an XML schema and parsed into facts.
The first stage of the solution synthesis is the user's specification of the problem using concepts and keywords derived from a problem ontology. The problem ontology, derived from the methods, data and human expert ontologies, consists of concepts describing the intended uses of each of the resources. This limitation was introduced to ensure the framework had access to the necessary entities to solve a user's problem. A more advanced version of the problem specification is proposed which uses natural language parsing to allow the user to specify a problem. This query would then be mapped to the problem ontology, allowing the user to use their own semantics instead of being governed by those of the system.

The second stage of the solution synthesis process parses the rules and facts describing relationships between data, methods, and human experts. The JESS rule compare (Table 1) illustrates the interaction between the rule (or method) requirements and the facts (data and human experts). Sections of the rule not essential for illustrating its function have been removed. It is important to note that these rules do not perform the operations described; rather, they mimic the semantic change that would accompany such an operation. The Future Work section outlines the goal of running this system in tandem with a codeless programming environment to run the selected toolset automatically.

Table 1 - JESS sample code

(defrule compare                  ;; compare two data sets
  (need-comparison_result $?)     ;; backward-chaining trigger
  (datasource_a ?srcA)
  (datasource_b ?srcB)
  ?intersection_result <- (intersect ?srcA ?srcB)
  ?union_result <- (union ?srcA ?srcB)
  =>                              ;; THEN
  ;; "perform" the operation: record inputA, inputB, intersect and union
  (assert (comparison_result ?srcA ?srcB ?intersection_result ?union_result)))

With all of the resource rules defined, the missing link is the problem to be solved using these rules. The problem ontology is parsed into JESS to create a set of facts. These facts form the "goal" rule, which mirrors the user's problem specification. Each of the facts in the "if" component of the goal rule is of the form need-method_x. The JESS engine now has the requisite components for tool selection.

Utilizing backward chaining, JESS searches for rules which satisfy the left-hand side (LHS) of a rule. In the case of dependencies (patterns preceded by "need-"), JESS searches for rules that satisfy the "need-" request and runs them prior to running the rule generating the request. The compare rule (Table 1) therefore runs only when a previous rule requires a comparison_result fact to be asserted in order for that rule to be completed. The compare rule also has dependencies on rules that collect data sources (used for comparisons) and the rules that accomplish those comparisons (intersection and union). If each of these rules can be satisfied on the "if" side of the clause, then the results of the comparison rules are stored, together with the data sources that were used in the comparison and the products of the comparison.
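One way such a goal might be expressed (an illustrative sketch; in the framework the goal rule is generated from the parsed problem ontology rather than written by hand) is as a rule whose left-hand side asks for the final information product, so that the need- facts propagate backwards through rules such as compare:

;; Allow comparison_result facts to be derived on demand.
(do-backward-chaining comparison_result)

;; Illustrative goal rule mirroring a user's problem specification.  The first
;; pattern is initially unmatched, so JESS asserts (need-comparison_result ...)
;; and chains backwards through compare (Table 1) and its own dependencies.
(defrule goal-landcover-change-map
  (comparison_result $?)
  =>
  (assert (goal-satisfied landcover-change-map)))

Running the engine then reconstructs, through the chain of need- facts, the network of methods, data and experts whose firing order is recorded for the spanning-tree step described next.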
The results of the rule "firing" are stored in a list that will be used to form a minimal spanning tree for graphing [10-30]. As the engine runs, each of the rules "needed" is satisfied using backward chaining, the goal is fulfilled, and a network of resources is constructed. As each rule fires and populates the network, a set of criteria is added to a JESS fact describing each of the user criteria that limit the network. Each of these criteria is used to create a minimal spanning tree of operations. User criteria are initially based upon the key spatial concepts of identity, location, direction, distance, magnitude, scale, time availability, operation time, and semantic change.

[Figure 6 – Interface showing constraint selection.]

Users specify the initial constraints via the user interface (Figure 6) prior to the automated selection of tools. As an example, suppose a satellite image is required for an interpretation task, but the only available data is 30 days old and data from the next orbit over the region will not be available for another 8 hours. Is it "better" to wait for that data to become available, or is it more crucial to achieve a solution in a shorter time using potentially out-of-date data? It is possible that the user will request a set of limiting conditions that is too strict to permit a solution; in these cases all possible solutions will be displayed, allowing the user to modify their constraints. The user-specified constraints are used to prune the network of resources constructed (i.e. all possible solutions to the problem) to a minimal spanning tree, which is the solution that satisfies all of the user's constraints.
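This pruning can be read as a least-cost selection over the candidate resource networks. A sketch of that selection, with symbols of our own choosing that mirror the weighted constraints summed in the Results section:

\operatorname{cost}(S) = \sum_{i=1}^{n} u_i \, c_i(S), \qquad S^{*} = \arg\min_{S \in \mathcal{S}} \operatorname{cost}(S),

where \mathcal{S} is the set of candidate networks that satisfy the goal, c_i(S) scores candidate S against the i-th user criterion (scale, operation time, semantic change, and so on), u_i is the user-assigned weight, and S^{*} is the pruned tree that is finally displayed.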
VII. Results
This section presents the results of the framework's solution synthesis and representation of semantic change. The results of the knowledge discovery visualization are implicit in this discussion, as that component is used for the display of the minimal spanning tree.

A sample problem, finding a home location with a sunset view, is used to demonstrate the solution synthesis. In order to solve this problem, raster (DEM) and vector (road network) data need to be integrated. A raster overlay, using map algebra, followed by buffer operations is required to find suitable locations from height, slope and aspect data. The raster data of potential sites needs to be converted to a vector layer to enable a buffering operation with the vector road data. Finally, a viewshed analysis is performed to determine how much of the landscape is visible from candidate sites. The problem specification was simplified by hard-coding the user requirements into a set of facts loaded from an XML file; the user's problem specification was reduced to selecting pre-defined problems from a menu.

A user constraint of scale was set to ensure that data used by the methods in the framework were at a consistent scale, and appropriate data layers were selected based on their metadata and format. With the user requirements parsed into JESS and a problem selected, the solution engine selected the methods, data and human experts required to solve the problem. The solution engine constructed a set of all possible combinations and then determined the shortest path by summing the weighted constraints specified by the user. Utilizing the abstract notation from above, with methods specifying change thus:

M_1 : D\{p_1, p_2, \ldots, p_n\} \xrightarrow{\;Operation\;} D_1\{p'_1, p'_2, \ldots, p'_n\},

the user weights u_i were included and summed for all modified data sets:

D_1\{u_1 p_1, u_2 p_2, \ldots, u_n p_n\}, \ldots, D_n\{u_1 p_1, u_2 p_2, \ldots, u_n p_n\}.

As a result of this process the solution set is pruned until only the optimal solution remains (based on user constraints).

[Figure 7 – Diagram showing the optimal path derived from the thinned network.]

VIII. Future Work
The ultimate goal of this project is to integrate the problem-solving environment with the codeless programming environment GeoVISTA Studio, currently under development at Pennsylvania State University. The possibility of supplying data to the framework and determining the types of questions which could be answered with it is also an interesting problem. A final goal is the use of natural language parsing of the user's problem specification.

IX. Conclusions
This paper outlined a framework for representing, manipulating and reasoning with geographic semantics. The framework enables visualizing knowledge discovery, automating tool selection for user-defined geographic problem solving, and evaluating semantic change in knowledge discovery environments. A minimal spanning tree representing the optimal (least-cost) solution was extracted from the graph of potential solutions and can be displayed in real time. The semantic change(s) that result from the interaction of data, methods and people contained within the resulting tree represent the formation history of each new information product (such as a map or overlay) and can be stored, indexed and searched as required.

X. Acknowledgements
Our thanks go to Sachin Oswal, who helped with the customization of the TouchGraph concept visualization tool used here. This work is partly funded by NSF grants ITR (BCS)-0219025 and ITR Geosciences Network (GEON).
References
[1] Abel, D.J., Taylor, K., Ackland, R. and Hungerford, S., 1998. An Exploration of GIS Architectures for Internet Environments. Computers, Environment and Urban Systems, 22(1): 7-23.
[2] Albrecht, J., 1994. Universal elementary GIS tasks - beyond low-level commands. In: T.C. Waugh and R.G. Healey (eds), Sixth International Symposium on Spatial Data Handling, pp. 209-22.
[3] Bishr, Y., 1997. Semantic aspects of interoperable GIS. Ph.D. Dissertation, Enschede, The Netherlands, 154 pp.
[4] Bishr, Y., 1998. Overcoming the semantic and other barriers to GIS interoperability. International Journal of Geographical Information Science, 12(4): 299-314.
[5] Brodaric, B. and Gahegan, M., 2002. Distinguishing Instances and Evidence of Geographical Concepts for Geospatial Database Design. In: M.J. Egenhofer and D.M. Mark (eds), GIScience 2002. Lecture Notes in Computer Science 2478. Springer-Verlag, pp. 22-37.
[6] Chandrasekaran, B., Josephson, J.R. and Benjamins, V.R., 1997. Ontology of Tasks and Methods. AAAI Spring Symposium.
[7] Egenhofer, M., 2002. Toward the semantic geospatial web. Tenth ACM International Symposium on Advances in Geographic Information Systems. ACM Press, New York, NY, USA, McLean, Virginia, USA, pp. 1-4.
[8] Fabrikant, S.I. and Buttenfield, B.P., 2001. Formalizing Semantic Spaces for Information Access. Annals of the Association of American Geographers, 91(2): 263-280.
[9] Federal Geographic Data Committee, 1998. FGDC-STD-001-1998. Content standard for digital geospatial metadata (revised June 1998). Federal Geographic Data Committee, Washington, D.C.
[10] Fonseca, F.T., 2001. Ontology-Driven Geographic Information Systems. Doctor of Philosophy Thesis, The University of Maine, 131 pp.
[11] Fonseca, F.T., Egenhofer, M.J., Davis Jr., C.A. and Borges, K.A.V., 2000. Ontologies and knowledge sharing in urban GIS. Computers, Environment and Urban Systems, 24: 251-271.
[12] Fonseca, F.T. and Egenhofer, M.J., 1999. Ontology-Driven Geographic Information Systems. In: C.B. Medeiros (ed.), 7th ACM Symposium on Advances in Geographic Information Systems, Kansas City, MO, pp. 7.
[13] Gahegan, M., Takatsuka, M., Wheeler, M. and Hardisty, F., 2002. Introducing GeoVISTA Studio: an integrated suite of visualization and computational methods for exploration and knowledge construction in geography. Computers, Environment and Urban Systems, 26: 267-292.
[14] Gahegan, M.N., 1999. Characterizing the semantic content of geographic data, models, and systems. In: M.F. Goodchild, M.J. Egenhofer, R. Fegeas and C.A. Kottman (eds), Interoperating Geographic Information Systems. Kluwer Academic Publishers, Boston, pp. 71-84.
[15] Gahegan, M.N., 1996. Specifying the transformations within and between geographic data models. Transactions in GIS, 1(2): 137-152.
[16] Guarino, N., 1997a. Semantic Matching: Formal Ontological Distinctions for Information Organization, Extraction, and Integration. In: M.T. Pazienza (ed.), Information Extraction: A Multidisciplinary Approach to an Emerging Information Technology. Springer-Verlag, pp. 139-170.
[17] Guarino, N., 1997b. Understanding, building and using ontologies. International Journal of Human-Computer Studies, 46: 293-310.
[18] Hakimpour, F. and Timpf, S., 2002. A Step towards GeoData Integration using Formal Ontologies. In: M. Ruiz, M. Gould and J. Ramon (eds), 5th AGILE Conference on Geographic Information Science. Universitat de les Illes Balears, Palma de Mallorca, Spain, pp. 5.
[19] Honda, K. and Mizoguchi, F., 1995. Constraint-based approach for automatic spatial layout planning. 11th Conference on Artificial Intelligence for Applications, Los Angeles, CA, p. 38.
[20] Kokla, M. and Kavouras, M., 2002. Theories of Concepts in Resolving Semantic Heterogeneities. 5th AGILE Conference on Geographic Information Science, Palma, Spain, pp. 2.
[21] Kuhn, W., 2002. Modeling the Semantics of Geographic Categories through Conceptual Integration. In: M.J. Egenhofer and D.M. Mark (eds), GIScience 2002. Lecture Notes in Computer Science. Springer-Verlag.
[22] MacEachren, A.M., in press. An evolving cognitive-semiotic approach to geographic visualization and knowledge construction. Information Design Journal.
[23] Mark, D., Egenhofer, M., Hirtle, S. and Smith, B., 2002. Ontological Foundations for Geographic Information Science. UCGIS Emerging Research Theme.
[24] Pascoe, R.T. and Penny, J.P., 1995. Constructing interfaces between (and within) geographical information systems. International Journal of Geographical Information Systems, 9: 275.
[25] Pundt, H. and Bishr, Y., 2002. Domain ontologies for data sharing - an example from environmental monitoring using field GIS. Computers & Geosciences, 28: 95-102.
[26] Smith, B. and Mark, D.M., 2001. Geographical categories: an ontological investigation. International Journal of Geographical Information Science, 15(7): 591-612.
[27] Sotnykova, A., 2001. Design and Implementation of Federation of Spatio-Temporal Databases: Methods and Tools. Centre de Recherche Public - Henri Tudor and Laboratoire de Bases de Données (Database Laboratory).
[28] Sowa, J.F., 2000. Knowledge Representation: Logical, Philosophical and Computational Foundations. Brooks/Cole, USA.
[29] Turner, M. and Fauconnier, G., 1998. Conceptual Integration Networks. Cognitive Science, 22(2): 133-187.
[30] Visser, U., Stuckenschmidt, H., Schuster, G. and Vögele, T., 2002. Ontologies for geographic information processing. Computers & Geosciences, 28: 103-117.