A myth or a vision for interoperability: can systems communicate like humans do?

Seminar - Interoperability challenges and needs: When Research meets Industry, 3rd June 2013, CRP Henri Tudor


1. A myth or a vision for interoperability: can systems communicate like humans do?
dr Milan Zdravković
Laboratory for Intelligent Production Systems (LIPS)
Faculty of Mechanical Engineering in Niš, University of Niš, Serbia
Seminar - Interoperability challenges and needs: When Research meets Industry, 3rd June 2013, CRP Henri Tudor, Luxembourg

2. Statement of the problem
• Motivation
– In the future IoT, every "thing" will be a system
• More complexity, fewer prior agreements and assumptions
• Can one system operate based on message(s) of arbitrary content, sent by (an)other (unknown) system(s)?
– It is a problem of systems interoperability, not of data or enterprise interoperability
– How to represent that content, and how to reason based on it?
Next: Artificial intelligence

3. Illusion of / artificial intelligence
• Turing test (Turing, 1950)
– A test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of an actual human
– Turing reduced the problem of defining intelligence to a simple conversation
• Example: ELIZA
– Examines users' comments for keywords (a minimal sketch follows below)
Next: PARRY

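To make the keyword-scanning idea concrete, here is a minimal ELIZA-style responder in Python. It is a sketch only: the rule table and canned replies are invented for illustration and are not Weizenbaum's actual script.

```python
# Minimal ELIZA-style responder: scans the user's input for known
# keywords and answers with a canned, content-free reply.
# The rule table below is illustrative, not the original ELIZA script.

RULES = {
    "mother": "Tell me more about your family.",
    "always": "Can you think of a specific example?",
    "because": "Is that the real reason?",
    "computer": "Do computers worry you?",
}

DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    words = utterance.lower().split()
    for keyword, reply in RULES.items():
        if keyword in words:
            return reply
    return DEFAULT  # no keyword matched: fall back to a neutral prompt

if __name__ == "__main__":
    print(respond("I am always tired"))  # -> "Can you think of a specific example?"
```

The illusion of intelligence rests entirely on the user projecting understanding onto these shallow pattern matches.
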
4. If a more specific context for the illusion is provided, the odds get better
• PARRY (1972) attempted to model the behavior of a paranoid schizophrenic
• It easily passed the Turing test (as evaluated by psychiatrists)
– Correct identification only 48 per cent of the time, a figure consistent with random guessing
Next: Chinese room experiment

5. Chinese room experiment
• Intended to show that the Turing test cannot be used to determine whether a machine can think
• The experiment is the centerpiece of Searle's Chinese room argument, which holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently it may make it behave
Next: Chinese room argument

6. Chinese room argument
• Axioms
– (A1) "Programs are formal (syntactic)."
– (A2) "Minds have mental contents (semantics)."
– (A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
• Problem: what the experiment shows is that passing a Turing test is possible without understanding. It does not show that it is impossible to reconstruct or interpret semantics based on the syntax
• Conclusion
– (C1) "Programs are neither constitutive of nor sufficient for minds."
Next: Do systems need to "understand"?

7. OK, systems can(not) be intelligent (can(not) understand), but is that really important?
• The Turing test is explicitly anthropomorphic
– If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people?
– Russell and Norvig: "the goal of aeronautical engineering is not to make machines that fly exactly like pigeons because they need to fool other pigeons"
• For example, Description Logic (DL) is somewhat close to the knowledge representation in our minds. But could knowledge be represented (or reasoning implemented) in some other way, using other kinds of formalisms (not yet existing)?
Next: Functionalism

8. Functionalism (Putnam, 1960)
• Mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role, that is, by their causal relations to other mental states, sensory inputs, and behavioral outputs
• Mental states can be manifested in various systems, perhaps even computers, so long as the system performs the appropriate functions
– Mental states can be sufficiently explained without taking into account the underlying physical medium
• Computational theory of mind (Putnam, 1961)
– The mind is a machine that derives output representations of the world from input representations in a deterministic (non-random) and formal (non-semantic) way
Next: Reverse Chinese Room argument

9. Reverse Chinese Room argument
• There may exist a system which, when provided with detailed instructions on how to interpret "sensory inputs", could produce corresponding (reasonable) "behavioral outputs", or even something like a "mental state" (a toy sketch follows below)
• Sensory inputs? Behavioral outputs?
Next: Definition of interoperability

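A toy sketch of the reverse Chinese Room idea, assuming the "detailed instructions" take the form of condition-to-action rules: the system knows nothing about the domain, it only mechanically applies the interpretation instructions it was handed. The sensor names, thresholds and actions are invented for illustration.

```python
# A rule follower in the spirit of the reverse Chinese Room argument:
# interpretation instructions map "sensory inputs" to "behavioral
# outputs" without any understanding of what the readings mean.

from typing import Callable

# Instructions: predicate over the sensory input -> behavioral output
INSTRUCTIONS: list[tuple[Callable[[dict], bool], str]] = [
    (lambda s: s.get("temperature", 0) > 80, "open_cooling_valve"),
    (lambda s: s.get("pressure", 0) > 5.0, "sound_alarm"),
]

def behave(sensory_input: dict) -> list[str]:
    """Apply every instruction whose condition matches the input."""
    return [action for cond, action in INSTRUCTIONS if cond(sensory_input)]

print(behave({"temperature": 95, "pressure": 2.1}))  # ['open_cooling_valve']
```
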
10. My favorite definition (of MANY) of interoperability
• ISO/IEC 2382 defines interoperability as the "capability (of the agent) to communicate, execute programs, or transfer data among various functional units in a manner that requires the user (agent) to have little or no knowledge of the unique characteristics of those units"
Next: Sensation

11. Sensation
• Senses are physiological capacities of organisms that provide data for perception
• However, perception and sensation cannot be considered in isolation, because of the filtering (selection), organizing (grouping, categorization) and even interpretation processes
– Organizing various stimuli into more meaningful patterns
Next: Checker shadow illusion

12. Perception
• The brain's perceptual systems actively and pre-consciously attempt to make sense of their input
[Diagram: distal stimulus (object) → input energy → sense → transduction → proximal stimulus (object) → pattern of neural activity → processing → percept, a mental recreation of the distal stimulus]
Next: Perceptual set

13. Perceptual set (expectancy)
• Predisposition to perceive things in a certain way
– Experience, expectation, motivation
• Sensations are, by themselves, unable to provide a unique description of the world
– Perception is both a bottom-up (senses) and a top-down (perceptual set) process
• Perceptual bias (negative)
– Epistemic commitment
– For example, referee decisions in a football game
[Figure: the non-word "sael" is read as "seal" or "sail", grouping and interpretation of a non-word by using different perceptual sets]

14. Could perception be formalized? Gestalt laws of grouping (1923)
• Laws that, hypothetically, allow us to predict the interpretation of sensation
– We tend to order our experience in a manner that is regular, orderly, symmetric, and simple
– A major aspect of Gestalt psychology is that it implies that the mind understands external stimuli as wholes rather than as the sums of their parts
• Grouping by proximity, similarity, complementarity (closure), symmetry, continuity, etc. (a proximity-grouping sketch follows below)
Next: Cognition

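As a hint that at least one Gestalt law is mechanizable, here is a minimal sketch of grouping by proximity: points closer than a threshold end up in the same group (single-linkage style). The threshold and the points are illustrative choices, not part of the original slides.

```python
# Gestalt grouping by proximity: indices i and j share a group if some
# chain of points links them with pairwise distance <= threshold.

import math

def group_by_proximity(points: list[tuple[float, float]],
                       threshold: float) -> list[set[int]]:
    groups: list[set[int]] = []
    for i, p in enumerate(points):
        # find existing groups that contain a point close enough to p
        linked = [g for g in groups
                  if any(math.dist(p, points[j]) <= threshold for j in g)]
        merged = {i}.union(*linked) if linked else {i}
        groups = [g for g in groups if g not in linked] + [merged]
    return groups

pts = [(0, 0), (0.5, 0), (0.4, 0.3), (5, 5), (5.2, 4.9)]
print(group_by_proximity(pts, threshold=1.0))  # [{0, 1, 2}, {3, 4}]
```
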
15. Cognition
• How we know the world
– The term "cognition" refers to all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used
• Includes processes such as memory, association, concept formation, pattern recognition, attention, perception, problem solving, mental imagery, ...
Next: Concept learning

16. Concept learning
• Bruner (1967): "the search for and listing of attributes that can be used to distinguish exemplars from non-exemplars of various categories" (a minimal sketch follows below)
– Trial-and-error
– Based on applied perception rules (not only identification, but also assumption)
• Explanation-based theory of concept learning
– The mind observes or receives the qualities of a thing
– Then it forms a concept which possesses and is identified by those qualities
– Derived from the theory of progressive generalizing (1986)
• The mind separates information that applies to more than one thing and enters it into a broader description of a category of things. This is done by identifying sufficient conditions for something to fit in a category
Next: Intensional conceptualization

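A minimal sketch of Bruner-style attribute listing: collect the attributes shared by every exemplar of a category and absent from every non-exemplar, as candidate conditions for category membership. The animals and attributes below are invented for illustration.

```python
# Concept learning as attribute listing: which attributes distinguish
# exemplars from non-exemplars of a category?

def learn_concept(exemplars: list[set[str]],
                  non_exemplars: list[set[str]]) -> set[str]:
    common = set.intersection(*exemplars)         # shared by all exemplars
    excluded = set.union(*non_exemplars) if non_exemplars else set()
    return common - excluded                      # and by no counterexample

birds = [{"feathers", "beak", "flies"}, {"feathers", "beak", "swims"}]
others = [{"fur", "flies"}, {"scales", "swims"}]
print(learn_concept(birds, others))  # {'feathers', 'beak'}
```

Dropping every attribute that appears in any non-exemplar is stricter than strictly necessary, but it mirrors the idea of generalizing only over information that reliably separates the category from its complement.
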
17. Intensional conceptualization
• Logical positivists: meaning is nothing more or less than the truth conditions it involves
• Here, meaning is explained by using references to the actual existing (possibly also logically explained) things in the world
– By using not only necessary but also sufficient conditions
• The process of representing such meanings is called intensional conceptualization
Next: Meaning in linguistics

18. Meaning in linguistics
• Meaning is what the sender expresses, communicates or conveys in its message to the receiver (or observer) and what the receiver infers from the current context (Akmajian et al., 1995)
– Different contexts -> different interpretations
– Linguistic context
• How meaning is understood without relying on intent and assumptions
• Depends on the expressivity of the vocabulary and the level of abstraction
– Situational context
• Refers to non-linguistic factors which affect the meaning of the message
Next: Definition of systems interoperability

19. Formalized systems interoperability (based on Sowa, 2000)

system(S) ∧ system(R) ∧ interoperable(S,R) ⇒
  ∀p ( data(p) ∧ transmitted-from(p,S) ∧ transmitted-to(p,R) ⇒
    ∀q ( statement-of(q,S) ∧ (p ⇒ q) ⇒
      ∃q' ( statement-of(q',R) ∧ (p ⇒ q') ∧ (q' ⇔ q) ) ) )

In words: if systems S and R are interoperable, then for every data item p transmitted from S to R, every statement q that p entails in S has a counterpart q' entailed in R that is equivalent to q. (A finite-model check of this condition is sketched below.)
Next: Summary of human communication process

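A finite-model check of the condition above, assuming entailment and equivalence are given as plain lookup tables; in practice they would come from a reasoner. All names and data are invented for illustration.

```python
# Check the Sowa-style condition on finite sets: for every transmitted
# data item p, every statement q entailed in S must have an equivalent
# statement q' entailed in R.

def interoperable(transmitted: list[str],
                  entails_S: dict[str, set[str]],
                  entails_R: dict[str, set[str]],
                  equivalent: dict[str, set[str]]) -> bool:
    for p in transmitted:
        for q in entails_S.get(p, set()):
            # some q' entailed by p in R must be equivalent to q
            if not any(q in equivalent.get(q2, set())
                       for q2 in entails_R.get(p, set())):
                return False
    return True

entails_S = {"order42": {"customer exists", "item in stock"}}
entails_R = {"order42": {"Customer(42) known", "Stock(item) > 0"}}
equivalent = {"Customer(42) known": {"customer exists"},
              "Stock(item) > 0": {"item in stock"}}
print(interoperable(["order42"], entails_S, entails_R, equivalent))  # True
```
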
20. Human communication as a raw model for interoperability
[Diagram: stimulus (sensory energy) → Sensation → Perception → Cognition → Articulation → articulated response (recipients, language, means)]
• Perception: providing meaning to various sensations, in the contexts of perceptual sets (motivation, expectations, experience, culture, etc.)
• Cognition: gaining knowledge and comprehension from the sensations; storage, reasoning, problem solving, imagining, concept learning

21. Requirements for interoperability
The stages of the human communication process map onto system components (web services, ontologies, query processing, semantic matching, a reasoner, mappings):
• Sensation
– "Ask" & "Tell" interface (web services)
• Perception
– Grouping, categorization and selection laws: semantic matching and reasoning
– Perceptual set: explicit knowledge (ontologies); motivation?
• Cognition
– Triple store
– Formalized business rules
– Rules-enabled reasoning (generalization and specialization)
– Assertion of new knowledge
(a triple-store sketch follows below)
Formally, the converse of the implication from slide 19:
system(S) ∧ system(R) ∧
  ∀p ( data(p) ∧ transmitted-from(p,S) ∧ transmitted-to(p,R) ⇒
    ∀q ( statement-of(q,S) ∧ (p ⇒ q) ⇒
      ∃q' ( statement-of(q',R) ∧ (p ⇒ q') ∧ (q' ⇔ q) ) ) )
  ⇒ interoperable(S,R)
Next: Implementation of interoperable systems

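A minimal sketch of the cognition components named above (triple store, rules-enabled generalization, assertion of new knowledge, query processing), using the rdflib library. The ontology URIs and facts are invented for illustration; this is one possible realization, not the architecture's prescribed implementation.

```python
# Triple store with one rules-enabled generalization step (RDFS
# subclass propagation) and a query over the enriched store.

from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()

# Explicit knowledge: a small local ontology plus one asserted fact
g.add((EX.PurchaseOrder, RDFS.subClassOf, EX.BusinessDocument))
g.add((EX.order42, RDF.type, EX.PurchaseOrder))

# Generalization rule: if x is a C and C is a subclass of D, then x is a D
inferred = [(x, RDF.type, d)
            for x, _, c in g.triples((None, RDF.type, None))
            for _, _, d in g.triples((c, RDFS.subClassOf, None))]
for triple in inferred:
    g.add(triple)  # assertion of new knowledge into the triple store

# Query processing over the enriched store
q = "SELECT ?doc WHERE { ?doc a ex:BusinessDocument }"
for row in g.query(q, initNs={"ex": EX}):
    print(row.doc)  # http://example.org/onto#order42
```
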
22. Implementing interoperable systems
• S1-Sn: Enterprise Information Systems
• OL1-OLn: local ontologies
• OD1, OD2: domain ontologies
• MLiDj: mappings between local and domain ontologies (MD1D2: a mapping between the two domain ontologies)
• A mapping between two local ontologies is derived by composing their mappings to shared domain ontologies (a composition sketch follows below):
– MO1O2 ≡ f(ML1D1, ML2D1)
– MO1On ≡ f(ML1D1, MLnD1)
– MO1Oi ≡ f(ML1D1, MD1D2, MLiD2)
Next: "Human" ontology

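A minimal sketch of the simplest composition, MO1O2 ≡ f(ML1D1, ML2D1): two local concepts are matched whenever they map to the same domain concept. Mappings are plain concept-to-concept dictionaries here, assuming each domain concept has at most one counterpart per local ontology; all concept names are invented for illustration.

```python
# Derive a local-to-local mapping by composing two local-to-domain
# mappings through their shared domain ontology.

def compose(m_l1_d: dict[str, str], m_l2_d: dict[str, str]) -> dict[str, str]:
    """Map a concept of L1 to a concept of L2 whenever both map to the
    same domain concept."""
    d_to_l2 = {d: c2 for c2, d in m_l2_d.items()}  # invert M_L2D1
    return {c1: d_to_l2[d] for c1, d in m_l1_d.items() if d in d_to_l2}

M_L1D1 = {"l1:SalesOrder": "d:Order", "l1:Client": "d:Customer"}
M_L2D1 = {"l2:PO": "d:Order", "l2:Account": "d:Customer"}
print(compose(M_L1D1, M_L2D1))
# {'l1:SalesOrder': 'l2:PO', 'l1:Client': 'l2:Account'}
```

The three-mapping case MO1Oi ≡ f(ML1D1, MD1D2, MLiD2) is the same idea with one extra hop through the domain-to-domain mapping.
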
23. What makes a good ("human") ontology (1/2)?
• Well-defined scope
– Provides a context for communication, by means of a set of competency questions
– Think of an ontology as a perceptual set
• In situational (motivation) and linguistic (expressivity, related to domain(s)) contexts
– The more implicit, the better?
• Intensional approach to conceptualization
– Remember the explanation-based theory of concept learning?
• Epistemic commitment
– The obligation to uphold the factual truth of a given proposition and to provide reasons for one's belief in that proposition, irrespective of the context

24. What makes a good ("human") ontology (2/2)?
• Taxonomy
– Referring to "internal" or "external" concepts
– Remember progressive generalization?
• No ontology is an island
– Mappings with the concepts of other ontologies in the horizontal (expressivity) and vertical (level of abstraction) directions
• Meta-ontologies
– Complement the DL expressivity with new representation and inference methodologies and strategies
Next: "Human" ontology continuum

25. "Human" ontology continuum
[Diagram: a continuum relating scope of abstraction, expressivity, situational awareness and completeness]

26. Some future challenges
• Methodology issues
– Semantic vs. semantically-facilitated interoperability
– Avoiding the Yet-Another-Ontology (YAO) syndrome
– Is the expressivity of DL sufficient to facilitate efficient and effective reasoning and/or semantic matching?
• Technical issues
– How to make systems and local ontologies work together?

27. Thank you for your attention
dr Milan Zdravković
milan.zdravkovic@gmail.com
http://www.masfak.ni.ac.rs/milan.zdravkovic
Laboratory for Intelligent Production Systems (LIPS)
Faculty of Mechanical Engineering in Niš, University of Niš, Serbia
