The document discusses bridging the gap between PR-OWL and OWL semantics. It introduces probabilistic ontologies, which represent types of entities, properties, relationships, processes/events, statistical regularities, and uncertainty about all of these. It describes how the mapping between PR-OWL and OWL is currently incomplete and proposes solutions to better represent binary and n-ary relations between concepts.
Presentation given by Rommel N. Carvalho at the 9th International Workshop on Uncertainty Reasoning for the Semantic Web at the 12th International Semantic Web Conference on October 21, 2013, in Sydney, Australia. This was joint work between the Research and Strategic Information Directorate of Brazil's Office of the Comptroller General and the Department of Computer Science of the University of Brasília.
Title: A GUI for MLN.
Abstract: This paper focuses on the incorporation of the Markov Logic Network (MLN) formalism as a plug-in for UnBBayes, a Java framework for probabilistic reasoning based on graphical models. MLN is a formalism for probabilistic reasoning that combines the capacity of a Markov Network (MN) to deal with uncertainty, tolerating imperfect and contradictory knowledge, with the expressiveness of First-Order Logic. An MLN provides a compact language for specifying very large MNs and the ability to incorporate, in modular form, large domains of knowledge (expressed as First-Order Logic sentences). A Graphical User Interface for the software Tuffy was implemented in UnBBayes to facilitate the creation of, and inference with, MLN models. Tuffy is an open-source Java MLN engine.
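The weighted-formula semantics the abstract alludes to can be made concrete with a toy example. The sketch below is hypothetical, not code from the paper and not Tuffy syntax: it defines a single weighted rule, Smokes(x) => Cancer(x), over an invented two-person domain and computes a conditional probability by brute-force enumeration of all possible worlds, using the standard MLN definition P(world) ∝ exp(Σᵢ wᵢ·nᵢ(world)):

```python
import itertools
import math

# Hypothetical toy MLN (not from the paper, and not Tuffy syntax):
# one weighted rule, Smokes(x) => Cancer(x), over a two-person domain.
# Standard MLN semantics: P(world) is proportional to
# exp(sum_i w_i * n_i(world)), where n_i counts true groundings of rule i.

PEOPLE = ["anna", "bob"]

def smokes_implies_cancer(world):
    """Number of true groundings of Smokes(x) => Cancer(x)."""
    return sum(1 for x in PEOPLE
               if (not world[("Smokes", x)]) or world[("Cancer", x)])

RULES = [(1.5, smokes_implies_cancer)]  # (weight, grounding counter)

ATOMS = [(p, x) for p in ("Smokes", "Cancer") for x in PEOPLE]

def all_worlds():
    for bits in itertools.product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def weight(world):
    """Unnormalized probability; the normalizer Z cancels in conditionals."""
    return math.exp(sum(w * n(world) for w, n in RULES))

# P(Cancer(anna) | Smokes(anna)) by brute force over the 2^4 worlds.
num = sum(weight(w) for w in all_worlds()
          if w[("Smokes", "anna")] and w[("Cancer", "anna")])
den = sum(weight(w) for w in all_worlds() if w[("Smokes", "anna")])
print(round(num / den, 3))  # 0.818, i.e. sigmoid(1.5)
```

With a single rule of weight w, this conditional reduces analytically to the logistic function sigmoid(w) = 1/(1 + e^-w) ≈ 0.818 for w = 1.5, which the enumeration confirms; engines like Tuffy exist precisely because real domains make such enumeration infeasible.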
UniDL 2010 - Compatibility Formalization Between PR-OWL and OWL (Rommel Carvalho)
Presentation given by Rommel Carvalho at the First International Workshop on Uncertainty in Description Logics (UniDL), held at the Federated Logic Conference (FLoC), on 20 July 2010.
PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabilistic Methods (Rommel Carvalho)
Presentation given by Saminda Abeyruwan at the 6th Uncertainty Reasoning for the Semantic Web Workshop at the 9th International Semantic Web Conference on November 7, 2010.
Paper: PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabilistic Methods
Abstract: Formalizing an ontology for a domain manually is well known to be a tedious and cumbersome process, constrained by the knowledge acquisition bottleneck. Researchers have therefore developed algorithms and systems that help automate the process, among them systems that use text corpora for the acquisition. Our idea is also based on vast amounts of text. Here, we provide a novel unsupervised bottom-up ontology generation method, based on lexico-semantic structures and Bayesian reasoning, to expedite the ontology generation process. We provide one quantitative and two qualitative results illustrating our approach, using a high-throughput screening assay corpus and two custom text corpora. This process could also provide evidence for domain experts building ontologies with top-down approaches.
Tractability of the Crisp Representations of Tractable Fuzzy Description Logics (Rommel Carvalho)
Presentation given by Fernando Bobillo at the 6th Uncertainty Reasoning for the Semantic Web Workshop at the 9th International Semantic Web Conference in 2010.
Paper: Tractability of the Crisp Representations of Tractable Fuzzy Description Logics
Abstract: An important line of research within the field of fuzzy DLs is the computation of an equivalent crisp representation of a fuzzy ontology. In this short paper, we discuss the relation between tractable fuzzy DLs and tractable crisp representations. This relation heavily depends on the family of fuzzy operators considered.
Default Logics for Plausible Reasoning with Controversial Axioms (Rommel Carvalho)
Presentation given by Thomas Scharrenbach at the 6th Uncertainty Reasoning for the Semantic Web Workshop at the 9th International Semantic Web Conference in 2010.
Paper: Default Logics for Plausible Reasoning with Controversial Axioms
Abstract: Using a variant of Lehmann's Default Logics and Probabilistic Description Logics, we recently presented a framework that invalidates those unwanted inferences that cause concept unsatisfiability, without the need to remove explicitly stated axioms. The solutions of this method were shown to outperform classical ontology repair w.r.t. the number of inferences invalidated. However, conflicts may still exist in the knowledge base and can make reasoning ambiguous. Furthermore, solutions with a minimal number of invalidated inferences do not necessarily minimize the number of conflicts. In this paper we provide an overview of finding solutions that have a minimal number of conflicts while invalidating as few inferences as possible. Specifically, we propose to evaluate solutions w.r.t. the quantity of information they convey by appealing to the notion of entropy, and discuss a possible approach to computing the entropy w.r.t. an ABox.
Modeling a Probabilistic Ontology for Maritime Domain Awareness (Rommel Carvalho)
The document describes developing a probabilistic ontology for maritime domain awareness. It aims to develop an ontology capable of reasoning with evidence from different domains to provide situational awareness. It discusses ontologies, probabilistic ontologies, and using the Probabilistic Web Ontology Language and other techniques. It also presents an uncertainty modeling process and incremental methodology for modeling the probabilistic ontology, including modeling cycles with goals, queries, evidence and assumptions.
Presentation given by Rommel N. Carvalho at the 9th International Workshop on Uncertainty Reasoning for the Semantic Web at the 12th International Semantic Web Conference on October 21, 2013, in Sydney, Australia. This was joint work between the Research and Strategic Information Directorate of Brazil's Office of the Comptroller General and the Department of Computer Science of the University of Brasília.
Title: UMP-ST plug-in: a tool for documenting, maintaining, and evolving probabilistic ontologies.
Abstract: Although several languages have been proposed for dealing with uncertainty in the Semantic Web (SW), almost no support has been given to ontological engineers on how to create such probabilistic ontologies (PO). This task of modeling POs has proven to be extremely difficult and hard to replicate. This paper presents the first tool in the world to implement a process which guides users in modeling POs, the Uncertainty Modeling Process for Semantic Technologies (UMP-ST). The tool solves three main problems: the complexity in creating POs; the difficulty in maintaining and evolving existing POs; and the lack of a centralized tool for documenting POs. Besides presenting the tool, which is implemented as a plug-in for UnBBayes, this paper also presents how the UMP-ST plug-in could have been used to build the Probabilistic Ontology for Procurement Fraud Detection and Prevention in Brazil, a proof-of-concept use case created as part of a research project at the Brazilian Office of the Comptroller General (CGU).
The document discusses semantic interoperability within a company. It describes several tools that can be used to describe and structure semantics, including ontologies, tagging, classifications, and taxonomies. It provides examples of how these tools can be applied at an enterprise level, including enterprise ontologies, tag clouds, the Zachman framework, and IBM's Information Framework.
This document provides an overview of ontologies and the semantic web. It defines ontologies as formal specifications of conceptualizations that are shared between people and computers. Ontologies provide a common vocabulary and conceptual structure to facilitate understanding between humans and machines. They allow different systems and communities to work together by providing shared definitions of concepts and relationships. The development of ontologies and the semantic web aims to make web resources more computer-readable and enable machines to better understand and process online information.
The document discusses ontology from philosophical and computer science perspectives. In philosophy, ontology is the science of being and investigates categories of things that exist. In computer science, an ontology is an explicit specification of a conceptualization - the objects, relations, and other entities that are presumed to exist in some area of interest. It defines the types, properties, and interrelationships of the entities. The document contrasts ontologies with other concepts like conceptual schemas, knowledge bases, and classifications. It also discusses challenges in ontology engineering like balancing domain independence with application dependencies.
After tolerance, post-Cartesian politics require reconsidering who and what constitutes subjects of politics. The document discusses including those traditionally excluded, such as refugees, amateurs, and "dead labor", in governance and design. It raises ethical questions about what kind of subjects we want to become and what good a renewed polity should achieve. A renewed political economy must balance mediation, order, privation, and communication to resolve problems of distribution, temporalities, and order in a post-Kantian cosmopolitan way.
Transmission Of Multimedia Data Over Wireless Ad-Hoc Networks (Jan Champagne)
This document discusses a cross-layer service discovery mechanism for OLSRv2 mobile ad hoc networks. It proposes using the OLSRv2 routing protocol to disseminate service advertisements across the network. When a node has a service to advertise, it includes the information in its OLSRv2 control messages, which are then flooded to all nodes. Nodes can then look up services of interest directly from the routing table entries without needing to run a separate service discovery protocol. This integrated approach leverages the existing routing structure to provide service discovery while minimizing overhead.
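The core idea described above, piggybacking service advertisements on flooded routing-control messages so that service lookup can reuse the routing table, can be sketched in a few lines. This is a hypothetical simulation under invented names (ControlMsg, Node), not OLSRv2 message formats or the paper's implementation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the cross-layer idea: each node floods a control
# message that carries both topology info and (optionally) a service
# advertisement, so receivers learn routes and services in one pass.

@dataclass
class ControlMsg:
    origin: str
    services: tuple = ()          # piggybacked service advertisements

@dataclass
class Node:
    name: str
    routes: dict = field(default_factory=dict)    # origin -> next hop
    services: dict = field(default_factory=dict)  # service -> origin

    def receive(self, msg: ControlMsg, via: str) -> None:
        # Learn a route to the originator (grossly simplified: first hop wins;
        # real OLSRv2 runs a shortest-path computation over the topology).
        self.routes.setdefault(msg.origin, via)
        # Cross-layer part: harvest service ads from the same message,
        # so no separate service-discovery protocol is needed.
        for svc in msg.services:
            self.services[svc] = msg.origin

    def lookup(self, svc: str):
        """Resolve a service name straight to a next hop via the routing table."""
        origin = self.services.get(svc)
        return self.routes.get(origin) if origin else None

# Node "a" advertises a printer service; the message is flooded a -> b -> c.
msg = ControlMsg(origin="a", services=("printer",))
b, c = Node("b"), Node("c")
b.receive(msg, via="a")
c.receive(msg, via="b")
print(c.lookup("printer"))  # next hop toward the service provider: b
```

The design point the sketch illustrates is that `lookup` costs two table reads and zero extra messages, because discovery state was installed as a side effect of routing-message flooding that happens anyway.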
Wai March 2009 Representing Legal Knowledge On The Semantic Web (Rinke Hoekstra)
The document discusses representing legal knowledge on the semantic web. It outlines challenges in legal knowledge representation and proposes an incremental approach using a legal core ontology to represent basic legal concepts. It describes representing norms in OWL DL and addressing issues like exceptions, temporal aspects, and jurisdiction. Representing norms as institutional facts that impose qualifications on situations is discussed, along with addressing complexities like conflicting norms, lex specialis, and temporal scope.
Building shared and reusable ontologies, in both an operational and a more conceptual sense, requires a precise definition of the system of interest, classification of its relations by means of topological analysis, and explanation of its concepts through mereological tools (for example, decomposition of an object into its parts, or of a class into its subclasses). Our work presents an attempt to apply these procedures to urban systems, starting from the corpus of theories developed in urban system analysis, to achieve an ontology of the city with the features mentioned above, highlighting in particular three levels (physical, socio-economic, and mental) through which it is possible to observe the city.
The document discusses the need for ontologies and conceptual modeling in the financial derivatives industry. It notes that a 2010 law requires regulators to study the use of standardized computer-readable descriptions of derivatives; such descriptions, together with standardized definitions, could serve as legal definitions. The document suggests that ontologies may help define and describe derivatives transactions and positions, and asks how financial institutions currently approach this: what ontologies are used, how they are maintained and extended to cover new products, and what their scope and limitations are.
This document summarizes the first meeting of the Knowledge Representation seminar at King's College London in June 2010. It discusses ontologies from three perspectives:
1) The theoretical perspective defines ontologies and discusses different definitions.
2) The pragmatic perspective explains what ontologies are used for.
3) The design perspective outlines how to build ontologies and discusses components like logic, ontology, and computation.
The document also covers topics like the differences between ontologies and data models or knowledge bases, degrees of "ontological depth", upper vs. domain ontologies, examples of top-level ontologies, and realist vs. conceptualist perspectives on ontologies.
Information extraction involves extracting structured information from unstructured text. The goal is to identify named entities, relations between entities, and populate a database. This may also include event extraction, resolving temporal expressions, and wrapper induction. Common tasks include named entity recognition, co-reference resolution, relation extraction, and event extraction. Statistical methods like conditional random fields are often used. Evaluation involves measuring precision and recall.
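The evaluation step mentioned above is easy to make concrete. A minimal sketch, with invented example data, that scores a set of extracted (entity, type) pairs against a gold standard using precision, recall, and F1:

```python
# Score extracted (entity, type) pairs against a gold standard.
# The example entities below are invented for illustration.

def prf1(predicted: set, gold: set):
    """Precision, recall, and F1 for set-valued extraction output."""
    tp = len(predicted & gold)                      # exact (entity, type) matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("Rommel Carvalho", "PERSON"), ("Sydney", "LOC"), ("CGU", "ORG")}
pred = {("Rommel Carvalho", "PERSON"), ("Sydney", "ORG")}  # one wrong type

p, r, f = prf1(pred, gold)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.5 0.33 0.4
```

Note that a correct entity span with the wrong type counts as a full error under this exact-match scheme; shared-task evaluations often also report relaxed (span-only) variants.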
ISSS Language-Action Perspective Basics (Peter Jones)
The document discusses the Language/Action Perspective and its key concepts, history, and applications. It covers:
- Colin Cherry's definition of communication as the exchange of meanings between social participants to create understanding.
- The Language-Action Perspective conceived of conversations as coordinating actions between individuals through requests, agreements, and accounting for promises.
- Software embodiments of the Language-Action Perspective included the Coordinator system in 1986 and Orchestrator Mail in 2010 to facilitate conversations for action through commitments.
The Role Of Ontology In Modern Expert Systems Dallas 2008 (Jason Morris)
The document discusses the role of ontologies in modern expert system development. It provides background on expert systems and ontologies, explaining that ontologies define domains of knowledge and are used to encapsulate domain knowledge for use in expert systems. The document outlines the process of developing ontologies, including identifying concepts and relationships in a domain. It also provides an example of an expert system called SINFERS that uses ontologies to select soil property prediction models.
THEORY & REVIEW - Theorizing the Digital Object (Philip Faulkner) (susannr)
THEORY & REVIEW
THEORIZING THE DIGITAL OBJECT
Philip Faulkner
Clare College, University of Cambridge,
Cambridge, CB2 1TL, UNITED KINGDOM {[email protected]}
Jochen Runde
Cambridge Judge Business School and Girton College, University of Cambridge,
Cambridge, CB2 1AG, UNITED KINGDOM {[email protected]}
Prompted by perceived shortcomings of prevailing conceptualizations of digital technology in IS, we propose a theory aimed at capturing both the ontological complexity of digital objects qua objects, and how their identity and use is bound up with various social associations. We begin with what it is to be an object, the differences between material and nonmaterial objects, and various categories of nonmaterial objects including syntactic objects and bitstrings. Building on these categories we develop a conception of digital objects and a novel "bearer" theory of how material and nonmaterial objects combine. The role of computation is considered, and how the identity and system functions of digital objects flow from their social positioning in the communities in which they arise. Various implications of the theory are identified, focusing on its use as a conceptual frame through which to view digital phenomena, and its potential to inform existing perspectives with regard both to how digital technology per se and the relationship between people and digital technology should be theorized. These implications are illustrated with reference to secondary markets for software, the treatment of digital resources in the resource-based, knowledge-based, and service-dominant logic views of organizing, and recent work on sociomateriality.
Keywords: Nonmaterial objects, digital objects, bitstrings, digital technology, social positions, resources, resource-based view, service-dominant logic, sociomateriality, imbrication
Introduction
One of the striking features of the digital revolution has been the proliferation of what we will call digital objects, many of which have transformed and become indispensable parts of organizational life. Digital objects feature prominently in IS research and include computer systems and peripherals (Hibbeln et al. 2017; Xu et al. 2017), smart devices (Prasopoulou 2017; Yoo 2010), mobile apps (Boudreau 2012; Claussen et al. 2013; Hoehle and Venkatesh 2015), emails (Barley et al. 2011; Wang et al. 2016), blogs (Aggarwal et al. 2012; Chau and Xu 2012; Luo et al. 2017), electronic health records (Kohli and Tan 2016), online videos (Kallinikos and Mariátegui 2011; Susarla et al. 2012), 3D printers (Kyriakou et al. 2017), and enterprise systems (Strong and Volkoff 2010; Sykes 2015).
Illuminating as these and similar studies invariably are, however, their principal focus is on the human and organizational implications of the technology in question rather than on the devices themselves. The result is that research of this kind tends to invoke "pretheoretical understandings" (Ekbia 2009, p. 2555)
The document discusses developing ontologies, including:
1. What an ontology is and different types of ontologies such as taxonomies, thesauri, and reference models.
2. Representing ontologies using knowledge representation formalisms that have evolved from semantic networks to description logics.
3. The Semantic Web ontology language OWL, which extends RDFS and is divided into three species with different levels of expressivity.
11. vol 0003, www.iiste.org, call for paper no 2, pp. 143-159 (Alexander Decker)
This document summarizes an article that examines the governance of humanitarian projects in Southeast India after the 2004 tsunami. It discusses how notions of transparency and accountability are ambiguous when applied to complex cross-cultural projects implemented at a local level. The article uses insights from Actor Network Theory to analyze how "translation effects" and minor inconsistencies can paradoxically help institutionalize projects and amplify their impact, despite not being captured by formal management controls. It explores how ANT concepts like "front lines" and "computing centers" can provide understanding of surprises experienced in such projects.
The document discusses what a knowledge representation is. It argues that a knowledge representation plays five distinct roles:
1. It acts as a surrogate for real-world entities, allowing reasoning to be done internally rather than through direct interaction. All representations are imperfect surrogates.
2. It embodies a set of ontological commitments about how to conceptualize the world. Selecting a representation means deciding what aspects to focus on and what to ignore.
3. It provides a fragmentary theory of intelligent reasoning, specifying what inferences are sanctioned and recommended.
4. It serves as a pragmatic and efficient computational environment for thinking.
5. It acts as a medium for human expression, a language in which we say things about the world.
These three interdisciplinary creative projects show how value is created:
1) Sensory Threads explored biosensors and soundscapes through playful experimentation.
2) Gesture workshops captured motion data to connect technology and art.
3) MUPPITS is developing a new business model for film post-production using a systemic approach.
Ongoing research examines identifying, articulating, and stabilizing emergent business models and anticipating value creation through constant novelty and experimentation. Theories of emergence may help understand innovation and value.
Fiorella De Cindio, What after protests? Design issues and software tools tow... (dmcolab)
The document discusses issues around designing digital tools to support deliberative democracy after protests. It notes that both grassroots movements and public institutions often mistakenly use existing platforms like Facebook and Twitter rather than designing purpose-built tools. Well-designed tools, shaped by citizen movements, are needed to engage citizens in a sustainable way. The empirical nature of informatics involves studying real-world phenomena, explaining them through models and theories, and validating those models through experiments and case studies in real-life settings.
The document discusses theories of objects from STS scholars like Heidegger, Ihde, and Latour, and how they can inform our understanding of objects in an Internet of Things world. It explores how STS theories view the composition, position, and relations of objects. The researchers are working on a model and application to implement objects in IoT, and are drawing on lessons from these theorists about how technologies mediate our relations with objects and environments. They are still working through questions about how users can define, share, and search for IoT objects.
This document discusses the state-of-the-art of Internet of Things (IoT) ontologies. It begins by defining ontology and describing important design criteria for ontologies including clarity, coherence, extendibility, and minimal encoding bias. It then discusses the challenges of IoT, including large scale networks, deep heterogeneity, and unknown topology. Several existing IoT ontologies are described, including SWAMO, MMI Device Ontology, and SSN. The document concludes that while no single global IoT ontology currently exists, ontologies are needed to address the semantic interoperability challenges of heterogeneous IoT devices and domains.
Ouvidoria de Balcão vs Ouvidoria Digital: Desafios na Era Big Data (Walk-in Ombudsman vs. Digital Ombudsman: Challenges in the Big Data Era) (Rommel Carvalho)
Presentation given on 14/03/2017 by Rommel N. Carvalho at the 2017 Semana de Ouvidoria e Acesso à Informação (Ombudsman and Access to Information Week), organized by CGU.
YouTube: https://youtu.be/vNMtULu5X1c?t=3h20m24s
Como transformar servidores em cientistas de dados e diminuir a distância ent... (How to turn public servants into data scientists and reduce the distance...) (Rommel Carvalho)
Talk given by Dr. Rommel Novaes Carvalho, General Coordinator of the Observatório da Despesa Pública (Public Spending Observatory) and professor in the Professional Master's program in Applied Computing at UnB.
Event: Brasil 100% Digital: Integration and transparency at the service of society
Website: http://www.brasildigital.gov.br/
Date: 10/11/2016
Video: https://www.youtube.com/watch?v=3WYQlPR-RLw&feature=youtu.be&t=2h4m44s
More Related Content
Similar to PR-OWL 2.0 - Bridging the gap to OWL semantics
This document provides an overview of ontologies and the semantic web. It defines ontologies as formal specifications of conceptualizations that are shared between people and computers. Ontologies provide a common vocabulary and conceptual structure to facilitate understanding between humans and machines. They allow different systems and communities to work together by providing shared definitions of concepts and relationships. The development of ontologies and the semantic web aims to make web resources more computer-readable and enable machines to better understand and process online information.
The document discusses ontology from philosophical and computer science perspectives. In philosophy, ontology is the science of being and investigates categories of things that exist. In computer science, an ontology is an explicit specification of a conceptualization - the objects, relations, and other entities that are presumed to exist in some area of interest. It defines the types, properties, and interrelationships of the entities. The document contrasts ontologies with other concepts like conceptual schemas, knowledge bases, and classifications. It also discusses challenges in ontology engineering like balancing domain independence with application dependencies.
After tolerance, post-cartesian politics require reconsidering who and what constitutes subjects of politics. The document discusses including those traditionally excluded like refugees, amateurs, and "dead labor" in governance and design. It raises ethical questions about what kind of subjects we want to become and what good a renewed polity should achieve. A renewed political economy must balance mediation, order, privation, and communication to resolve problems of distribution, temporalities, and order in a post-kantian cosmopolitan way.
Transmission Of Multimedia Data Over Wireless Ad-Hoc NetworksJan Champagne
This document discusses a cross-layer service discovery mechanism for OLSRv2 mobile ad hoc networks. It proposes using the OLSRv2 routing protocol to disseminate service advertisements across the network. When a node has a service to advertise, it includes the information in its OLSRv2 control messages which are then flooded to all nodes. Nodes can then lookup services of interest directly from the routing table entries without needing to run a separate service discovery protocol. This integrated approach leverages the existing routing structure to provide service discovery while minimizing overhead.
WAI March 2009: Representing Legal Knowledge on the Semantic Web (Rinke Hoekstra)
The document discusses representing legal knowledge on the semantic web. It outlines challenges in legal knowledge representation and proposes an incremental approach using a legal core ontology to represent basic legal concepts. It describes representing norms in OWL DL and addressing issues like exceptions, temporal aspects, and jurisdiction. Representing norms as institutional facts that impose qualifications on situations is discussed, along with addressing complexities like conflicting norms, lex specialis, and temporal scope.
Building shared and reusable ontologies, in both an operational and a more conceptual sense, requires a precise definition of the system of interest, classification of its relations by means of topological analysis, and explanation of its concepts through mereological tools (for example, decomposing an object into its parts, or a class into its subclasses). Our work attempts to apply these procedures to urban systems, starting from the corpus of theories developed in urban system analysis, to arrive at an ontology of the city with the features mentioned above, highlighting in particular three levels (physical, socio-economic, and mental) through which the city can be observed.
The document discusses the need for ontologies and conceptual modeling in the financial derivatives industry. It notes that a 2010 law requires regulators to study the use of standardized computer-readable descriptions of derivatives; paired with standardized definitions, such descriptions could serve as legal definitions. The document suggests that ontologies may help define and describe derivatives transactions and positions, and asks how financial institutions currently approach this: what ontologies are used, how they are maintained and extended to cover new products, and what their scope and limitations are.
This document summarizes the first meeting of the Knowledge Representation seminar at King's College London in June 2010. It discusses ontologies from three perspectives:
1) The theoretical perspective defines ontologies and discusses different definitions.
2) The pragmatic perspective explains what ontologies are used for.
3) The design perspective outlines how to build ontologies and discusses components like logic, ontology, and computation.
The document also covers topics like the differences between ontologies and data models or knowledge bases, degrees of "ontological depth", upper vs. domain ontologies, examples of top-level ontologies, and realist vs. conceptualist perspectives on ontologies.
Information extraction involves extracting structured information from unstructured text. The goal is to identify named entities, relations between entities, and populate a database. This may also include event extraction, resolving temporal expressions, and wrapper induction. Common tasks include named entity recognition, co-reference resolution, relation extraction, and event extraction. Statistical methods like conditional random fields are often used. Evaluation involves measuring precision and recall.
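The precision/recall evaluation mentioned above can be made concrete with a short, self-contained sketch; the entity spans below are hypothetical, purely for illustration:

```python
# Minimal sketch: scoring an information-extraction system by comparing
# predicted entity mentions against a gold standard (hypothetical data).

def precision_recall_f1(predicted, gold):
    """Compute precision, recall, and F1 over sets of extracted items."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)            # true positives: correct extractions
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical output of a named entity recognizer: (start, end, type) spans.
predicted = {(0, 12, "PERSON"), (20, 26, "ORG"), (30, 36, "DATE")}
gold = {(0, 12, "PERSON"), (20, 26, "ORG"), (40, 48, "LOC")}

p, r, f = precision_recall_f1(predicted, gold)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

Treating each extracted mention as an exact (span, type) triple is the strictest scoring regime; shared-task evaluations often also report partial-match variants.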
ISSS Language-Action Perspective Basics (Peter Jones)
The document discusses the Language/Action Perspective and its key concepts, history, and applications. It covers:
- Colin Cherry's definition of communication as the exchange of meanings between social participants to create understanding.
- The Language-Action Perspective conceived of conversations as coordinating actions between individuals through requests, agreements, and accounting for promises.
- Software embodiments of the Language-Action Perspective included the Coordinator system in 1986 and Orchestrator Mail in 2010 to facilitate conversations for action through commitments.
The Role of Ontology in Modern Expert Systems, Dallas 2008 (Jason Morris)
The document discusses the role of ontologies in modern expert system development. It provides background on expert systems and ontologies, explaining that ontologies define domains of knowledge and are used to encapsulate domain knowledge for use in expert systems. The document outlines the process of developing ontologies, including identifying concepts and relationships in a domain. It also provides an example of an expert system called SINFERS that uses ontologies to select soil property prediction models.
THEORY & REVIEW
THEORIZING THE DIGITAL OBJECT
Philip Faulkner
Clare College, University of Cambridge,
Cambridge, CB2 1TL, UNITED KINGDOM {[email protected]}
Jochen Runde
Cambridge Judge Business School and Girton College, University of Cambridge,
Cambridge, CB2 1AG, UNITED KINGDOM {[email protected]}
Prompted by perceived shortcomings of prevailing conceptualizations of digital technology in IS, we propose a theory aimed at capturing both the ontological complexity of digital objects qua objects, and how their identity and use is bound up with various social associations. We begin with what it is to be an object, the differences between material and nonmaterial objects, and various categories of nonmaterial objects including syntactic objects and bitstrings. Building on these categories we develop a conception of digital objects and a novel "bearer" theory of how material and nonmaterial objects combine. The role of computation is considered, and how the identity and system functions of digital objects flow from their social positioning in the communities in which they arise. Various implications of the theory are identified, focusing on its use as a conceptual frame through which to view digital phenomena, and its potential to inform existing perspectives with regard both to how digital technology per se and the relationship between people and digital technology should be theorized. These implications are illustrated with reference to secondary markets for software, the treatment of digital resources in the resource-based, knowledge-based, and service-dominant logic views of organizing, and recent work on sociomateriality.
Keywords: Nonmaterial objects, digital objects, bitstrings, digital technology, social positions, resources, resource-based view, service-dominant logic, sociomateriality, imbrication
Introduction
One of the striking features of the digital revolution has been the proliferation of what we will call digital objects, many of which have transformed and become indispensable parts of organizational life. Digital objects feature prominently in IS research and include computer systems and peripherals (Hibbeln et al. 2017; Xu et al. 2017), smart devices (Prasopoulou 2017; Yoo 2010), mobile apps (Boudreau 2012; Claussen et al. 2013; Hoehle and Venkatesh 2015), emails (Barley et al. 2011; Wang et al. 2016), blogs (Aggarwal et al. 2012; Chau and Xu 2012; Luo et al. 2017), electronic health records (Kohli and Tan 2016), online videos (Kallinikos and Mariátegui 2011; Susarla et al. 2012), 3D printers (Kyriakou et al. 2017), and enterprise systems (Strong and Volkoff 2010; Sykes 2015).
Illuminating as these and similar studies invariably are, however, their principal focus is on the human and organizational implications of the technology in question rather than on the devices themselves. The result is that research of this kind tends to invoke "pretheoretical understandings" (Ekbia 2009, p. 2555)…
The document discusses developing ontologies, including:
1. What an ontology is and different types of ontologies such as taxonomies, thesauri, and reference models.
2. Representing ontologies using knowledge representation formalisms that have evolved from semantic networks to description logics.
3. The Semantic Web ontology language OWL, which extends RDFS and is divided into three species with different levels of expressivity.
11.vol 0003, www.iiste.org, call for paper, no. 2, pp. 143-159 (Alexander Decker)
This document summarizes an article that examines the governance of humanitarian projects in Southeast India after the 2004 tsunami. It discusses how notions of transparency and accountability are ambiguous when applied to complex cross-cultural projects implemented at a local level. The article uses insights from Actor Network Theory to analyze how "translation effects" and minor inconsistencies can paradoxically help institutionalize projects and amplify their impact, despite not being captured by formal management controls. It explores how ANT concepts like "front lines" and "computing centers" can provide understanding of surprises experienced in such projects.
The document discusses what a knowledge representation is. It argues that a knowledge representation plays five distinct roles:
1. It acts as a surrogate for real-world entities, allowing reasoning to be done internally rather than through direct interaction. All representations are imperfect surrogates.
2. It embodies a set of ontological commitments about how to conceptualize the world. Selecting a representation means deciding what aspects to focus on and what to ignore.
3. It provides a fragmentary theory of intelligent reasoning, specifying what inferences are sanctioned and recommended.
4. It serves as a pragmatic and efficient computational environment for thinking.
5. It acts as a medium for human expression, a language in which we say things about the world.
These three interdisciplinary creative projects show how value is created:
1) Sensory Threads explored biosensors and soundscapes through playful experimentation.
2) Gesture workshops captured motion data to connect technology and art.
3) MUPPITS is developing a new business model for film post-production using a systemic approach.
Ongoing research examines identifying, articulating, and stabilizing emergent business models and anticipating value creation through constant novelty and experimentation. Theories of emergence may help understand innovation and value.
Fiorella De Cindio, What after protests? Design issues and software tools tow… (dmcolab)
The document discusses issues around designing digital tools to support deliberative democracy after protests. It notes that both grassroots movements and public institutions often mistakenly rely on existing platforms like Facebook and Twitter rather than designing purpose-built tools. Well-designed tools are needed to support deliberation shaped by citizen movements and to engage citizens in a sustainable way. The empirical nature of informatics involves studying real-world phenomena, explaining them through models and theories, and validating those models through experiments and case studies in real-life settings.
The document discusses theories of objects from STS scholars like Heidegger, Ihde, and Latour, and how they can inform our understanding of objects in an Internet of Things world. It explores how STS theories view the composition, position, and relations of objects. The researchers are working on a model and application to implement objects in IoT, and are drawing on lessons from these theorists about how technologies mediate our relations with objects and environments. They are still working through questions about how users can define, share, and search for IoT objects.
This document discusses the state-of-the-art of Internet of Things (IoT) ontologies. It begins by defining ontology and describing important design criteria for ontologies including clarity, coherence, extendibility, and minimal encoding bias. It then discusses the challenges of IoT, including large scale networks, deep heterogeneity, and unknown topology. Several existing IoT ontologies are described, including SWAMO, MMI Device Ontology, and SSN. The document concludes that while no single global IoT ontology currently exists, ontologies are needed to address the semantic interoperability challenges of heterogeneous IoT devices and domains.
Similar to PR-OWL 2.0 - Bridging the gap to OWL semantics
Walk-in Ombudsman vs. Digital Ombudsman: Challenges in the Big Data Era (Rommel Carvalho)
Presentation given on March 14, 2017 by Rommel N. Carvalho at the 2017 Ombudsman and Access to Information Week, organized by the CGU.
YouTube: https://youtu.be/vNMtULu5X1c?t=3h20m24s
How to turn civil servants into data scientists and reduce the distance… (Rommel Carvalho)
Talk given by Dr. Rommel Novaes Carvalho, General Coordinator of the Public Spending Observatory (Observatório da Despesa Pública) and professor in the Professional Master's in Applied Computing at UnB.
Event: Brasil 100% Digital: integration and transparency at the service of society
Website: http://www.brasildigital.gov.br/
Date: November 10, 2016
Video: https://www.youtube.com/watch?v=3WYQlPR-RLw&feature=youtu.be&t=2h4m44s
A Proposed Risk Classification Model for Public Contracts (Rommel Carvalho)
The document proposes three models for assessing the risk of public contracts: 1) a supervised learning model to classify supplier risk based on variables such as political donations and history of sanctions; 2) a second model to classify contract risk based on aspects such as competitiveness and complexity; and 3) a multi-criteria model to select audit cases based on contract risk, company risk, and logistical considerations.
Categorizing findings in IT audits with supervised and unsupervised models (Rommel Carvalho)
Talk given by Patrícia Maia at the 2nd Seminar on Data Analysis in Public Administration @ http://www.brasildigital.gov.br/
Abstract: The work applied text mining techniques to identify the main topics addressed in audits over the last five years. Two approaches were used: a supervised approach, applying text classification with the Random Forest algorithm, and an unsupervised approach, using Latent Dirichlet Allocation (LDA) topic modeling. The pilot project was validated on IT findings and is now being extended to findings on other topics. The goal is to catalog the history of issued findings and automatically categorize new records, so that staff can retrieve similar past situations for new engagements or address recurring problems in a structural way. The same logic can also be used to generate knowledge from other kinds of text: requests under the Access to Information Law, e-OUV complaints, cases analyzed by the CRG, news of interest to the agency, and so on.
Speaker: Patrícia Maia - Ministry of Transparency, Oversight and Control
Bio: She holds a master's degree in Applied Computing from the University of Brasília (UnB), a specialization in Process Modeling and Requirements Engineering from the Federal University of Rio Grande do Sul (UFRGS), and an undergraduate degree in Information Technology. She has professional experience in text mining, ETL, databases, and government oversight, and currently works at the Ministry of Transparency, Oversight and Control (MTFC) in the Research and Strategic Information Directorate.
Mapping Corruption Risk in the Federal Public Administration (Rommel Carvalho)
The document describes a Brazilian government project to map corruption risk in the federal public administration through the analysis and mining of data on public servants and government units. The project uses advanced machine learning techniques and statistical analysis of large data sets to produce reliable corruption-risk indicators. The ultimate goal is a strategic tool for preventing and fighting corruption proactively.
1) The Public Spending Observatory (Observatório da Despesa Pública) uses data science techniques to identify risks of fraud and irregularities in public spending and to support decision making by public managers.
2) Projects such as the Supplier Risk Map, Preventive Procurement Analysis, and Automatic Complaint Triage use predictive analytics to head off risky situations.
3) The federal government's Price Database (Banco de Preços da APF) enables market research and the identification of overpricing in contracts.
Applying text mining techniques for automatic classification of… (Rommel Carvalho)
The use of automatic text classification has become increasingly common in recent years. When classifying at large scale, however, complexity grows considerably. A case study was carried out on complaint triage at the Office of the Comptroller General (Controladoria-Geral da União), involving a large number of target categories. The proposed solution employed machine learning and multilabel classification to build a model capable of overcoming the difficulties inherent to this context, with significant gains.
Patrícia Helena Maia Alves de Andrade - Controladoria-Geral da União
Finance and Control Analyst at the CGU, working on text mining and data analysis in the Research and Strategic Information Directorate. She is currently completing the Professional Master's in Applied Computing at the University of Brasília.
Party affiliation and corruption risk among federal public servants (Rommel Carvalho)
The document discusses using machine learning to analyze the relationship between political party affiliation and corruption risk among Brazilian federal public servants. The data showed a positive correlation between party affiliation and corruption cases. A random forest model achieved the best results, identifying key variables such as length of affiliation and reason for membership cancellation.
Using data and text mining to compute reference prices in… (Rommel Carvalho)
One of the CGU's major responsibilities is to identify government purchases made at prices that differ from those practiced in the market. This makes it possible to measure how efficient purchases by government agencies are. The information is useful both to the auditor, who is responsible for overseeing the use of public funds, and to the manager, who can improve processes by observing best practices from other government units. Given the enormous quantity and diversity of government purchases, this analysis is practically infeasible without some automated mechanism. For such automated analysis to be possible, however, one first needs a database of average, or reference, prices for each product to be analyzed. Although all Federal Government purchases are entered into a single, centralized system, the stored information is not detailed and structured enough to compute these reference prices.
This talk presents the methodology developed at the CGU, based on data mining techniques, to extract the necessary information from that centralized system and enable the computation of reference prices for products purchased by the Federal Government. It also presents analyses based on the price database built with this methodology, underscoring its importance for improving the management of public resources.
Rommel Novaes Carvalho - Controladoria-Geral da União
General Coordinator of the CGU's Public Spending Observatory (http://www.cgu.gov.br/assuntos/informacoes-estrategicas/observatorio-da-despesa-publica), he completed his PhD and postdoctoral research at George Mason University, USA, in Artificial Intelligence, Semantic Web, and Data Mining, and is also a professor in the Professional Master's in Applied Computing at UnB.
Preventive detection of purchase splitting (Rommel Carvalho)
The document describes a study on the preventive detection of purchase splitting (fracionamento de compras) in Brazil using Bayesian networks. The study used government purchasing data to build a model capable of identifying potential split purchases. After data preparation, different modeling algorithms were tested and evaluated, yielding a model with high accuracy and classification power. The model was deployed to flag potential purchase splitting in new government purchases.
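As an illustration of the kind of reasoning such a model performs, here is a toy posterior computation under a naive-Bayes independence assumption; all probabilities and features below are hypothetical, not the study's learned model:

```python
# Minimal sketch of Bayesian classification for purchase-splitting risk.
# All probabilities are hypothetical; the study learned its model from
# real government purchasing data.

def posterior_split(evidence_likelihoods, prior_split):
    """Bayes' rule: P(split | evidence), treating binary evidence items
    as conditionally independent given the class (naive Bayes)."""
    p_e_given_split = 1.0
    p_e_given_ok = 1.0
    for p_if_split, p_if_ok in evidence_likelihoods:
        p_e_given_split *= p_if_split
        p_e_given_ok *= p_if_ok
    num = p_e_given_split * prior_split
    den = num + p_e_given_ok * (1.0 - prior_split)
    return num / den

# Each pair is (P(feature | split), P(feature | regular)) — hypothetical.
evidence = [
    (0.9, 0.2),   # same supplier, several purchases in a short window
    (0.8, 0.3),   # amounts just below the no-bidding threshold
]
risk = posterior_split(evidence, prior_split=0.05)
print(f"{risk:.3f}")
```

Even with a low prior, strongly class-discriminating evidence raises the posterior well above the base rate, which is what makes this kind of model useful for proactive alerting.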
Automatic identification of the most frequent types of LAI requests (Rommel Carvalho)
The document describes a method for automatically identifying the most frequent request types under Brazil's Access to Information Law (LAI), through topic analysis of more than 300,000 requests using the Latent Dirichlet Allocation (LDA) model. The method identified several common topics, including requests about the Central Bank of Brazil (BACEN) and about public service entrance exams. The process took about 10 hours to analyze the 300,000 requests.
BMAW 2014 - Using Bayesian Networks to Identify and Prevent Split Purchases i… (Rommel Carvalho)
Presentation given by Rommel N. Carvalho at the 11th Bayesian Modeling Applications Workshop (BMAW 2014) at the 30th Conference on Uncertainty in Artificial Intelligence (UAI 2014) on July 27, 2014, Quebec City, Quebec, Canada. This was a joint work between the Research and Strategic Information Directorate of Brazil's Office of the Comptroller General and the Department of Computer Science of the University of Brasília.
Talk: https://www.youtube.com/watch?v=UVOsztdSQ3A
Paper: http://seor.gmu.edu/~klaskey/BMAW2014/BMAW2014_papers/bmaw2014_paper_6.pdf
Title: Using Bayesian Networks to Identify and Prevent Split Purchases in Brazil.
Abstract: To cope with society's demand for transparency and corruption prevention, the Brazilian Office of the Comptroller General (CGU) has carried out a number of actions, including: awareness campaigns aimed at the private sector; campaigns to educate the public; research initiatives; and regular inspections and audits of municipalities and states. Although CGU has collected information from various different sources - Revenue Agency, Federal Police, and others -, going through all the data in order to find suspicious transactions has proven to be really challenging. In this paper, we present a Data Mining study applied on real data - government purchases - for finding transactions that might become irregular before they are considered as such in order to act proactively. Moreover, we compare the performance of various Bayesian Network (BN) learning algorithms with different parameters in order to fine tune the learned models and improve their performance. The best result was obtained using the Tree Augmented Network (TAN) algorithm and oversampling the minority class in order to balance the data set. Using a 10-fold cross-validation, the model correctly classified all split purchases, it obtained a ROC area of .999, and its accuracy was 99.197%.
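The class-balancing step the abstract mentions (oversampling the minority class) can be sketched in a few lines; the rows and labels below are hypothetical, and the paper's actual classifiers were Bayesian networks learned with TAN:

```python
import random

def oversample_minority(rows, label_of, seed=42):
    """Randomly duplicate minority-class rows until all classes reach the
    size of the largest class (simple random oversampling)."""
    by_class = {}
    for row in rows:
        by_class.setdefault(label_of(row), []).append(row)
    target = max(len(group) for group in by_class.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_class.values():
        balanced.extend(group)
        # Draw extra rows with replacement from the smaller classes.
        balanced.extend(rng.choice(group) for _ in range(target - len(group)))
    return balanced

# Hypothetical data set: 9 regular purchases for every split purchase.
rows = [("purchase", 0)] * 90 + [("purchase", 1)] * 10
balanced = oversample_minority(rows, label_of=lambda r: r[1])
counts = {label: sum(1 for r in balanced if r[1] == label) for label in (0, 1)}
print(counts)
```

Balancing the training set this way keeps a learner from trivially predicting the majority class, at the cost of repeated minority examples; the evaluation (here, 10-fold cross-validation) should still be run on unbalanced held-out data.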
Integração do Portal da Copa @ Comissão CMA do Senado FederalRommel Carvalho
Apresentação preparada por Rommel N. Carvalho e apresentada pela Diretora de Sistemas e Informações da Controladoria-Geral da União (CGU), Tatiana Z. Panisset, na reunião da Comissão de Meio Ambiente, Defesa do Consumidor e Fiscalização e Controle (CMA) do Senado Federal (SF). A reunião teve como foco o debate da unificação da entrada de dados dos Portais de Transparência da Copa de 2014 do SF (www.copatransparente.gov.br) e da CGU (http://transparencia.gov.br/copa2014). Mais informações sobre a reunião em http://goo.gl/KCBD6.
As alternativas apresentadas foram discutidas e deliberadas pela CMA com aprovação da colaboração oficial entre o poder Legislativo e o poder Executivo para executar a integração da entrada de dados dos respectivos portais da copa do mundo. Notícias sobre essa colaboração podem ser encontradas em goo.gl/N8cbr, goo.gl/RVMGd, goo.gl/Ze3uJ, goo.gl/6o7BZ e goo.gl/C1CFv.
Title:
What open government data is and how to use it
Abstract:
The Semantic Web aims to associate the data made available on the Web with its meaning, so that the data can be understood by humans and machines alike. This will allow tasks previously performed only by humans to be delegated to machines. Semantic Web techniques have spread with the significant growth in applications that use ontologies and semantics through technologies such as RDF and OWL, among others, and with the many initiatives around the world to publish open data, especially open government data. Open government data is defined by the W3C (the Web Consortium) as "the publication and dissemination on the Web of data generated by the public sector, shared in a raw, open, logically understandable format, so as to allow its reuse in digital applications developed by society." The goal of this talk is to present the main concepts behind the various open government data initiatives, the current state of the initiative in Brazil, and the benefits it brings to society, such as using open data to help improve public administration and its transparency.
Speaker:
Dr. Rommel Novaes Carvalho, Ph.D
Postdoctoral Research Associate – C4I Center @ GMU
Finance and Control Analyst – CGU
http://mason.gmu.edu/~rcarvalh
Short bio:
Rommel Novaes Carvalho holds a bachelor's degree in Computer Science and a master's in Informatics from the University of Brasília, and a doctorate in Systems Engineering and Operations Research from George Mason University, USA. He is an Artificial Intelligence (AI) researcher and a member of the Artificial Intelligence Research Group (GIA) at the University of Brasília. His interests include representation and reasoning under uncertainty on the Semantic Web using Bayesian inference, data mining, and software engineering. A certified Java developer with experience implementing probabilistic network systems, he is the lead architect of UnBBayes, a framework for probabilistic reasoning under development by the GIA since 2000. In his doctorate he proposed and implemented version 2 of PR-OWL (Probabilistic OWL), enabling the reuse of existing deterministic ontologies, their interoperability with probabilistic ontologies represented in PR-OWL, and mixed ontological and probabilistic reasoning. He has worked at the Office of the Comptroller General as an Information Technology specialist since 2005, and in 2011 became a postdoctoral research associate at George Mason University.
Probabilistic Ontology: Representation and Modeling Methodology (Rommel Carvalho)
Oral Defense of Doctoral Dissertation
Volgenau School of Engineering, George Mason University
Rommel Novaes Carvalho
Bachelor of Science, University of Brasília, Brazil, 2003
Master of Science, University of Brasília, Brazil, 2008
Probabilistic Ontology: Representation and Modeling Methodology
Tuesday, June 28, 2011, 2:00pm -- 4:00pm
Nguyen Engineering Building, Room 4705
Committee
Kathryn Laskey, Chair
Paulo Costa
Kuo-Chu Chang
David Schum
Larry Kerschberg
Fabio Cozman
Abstract
The past few years have witnessed an increasingly mature body of research on the Semantic Web (SW), with new standards being developed and more complex problems being addressed. As complexity increases in SW applications, so does the need for principled means to cope with uncertainty in SW applications. Several approaches addressing uncertainty representation and reasoning in the SW have emerged. Among these is Probabilistic Web Ontology Language (PR-OWL), which provides Web Ontology Language (OWL) constructs for representing Multi-Entity Bayesian Network (MEBN) theories. However, there are several important ways in which the initial version PR-OWL 1.0 fails to achieve full compatibility with OWL. Furthermore, although there is an emerging literature on ontology engineering, little guidance is available on the construction of probabilistic ontologies.
This research proposes a new syntax and semantics, defined as PR-OWL 2.0, which improves compatibility between PR-OWL and OWL in two important respects. First, PR-OWL 2.0 follows the approach suggested by Poole et al. for formalizing the association of random variables from probabilistic theories with the individuals, classes, and properties of ontological languages such as OWL. Second, PR-OWL 2.0 allows values of random variables to range over OWL datatypes.
To address the lack of support for probabilistic ontology engineering, this research describes a new methodology for modeling probabilistic ontologies called Uncertainty Modeling Process for Semantic Technologies (UMP-ST). To better explain the methodology and to verify that it can be applied to different scenarios, this dissertation presents step-by-step constructions of two different probabilistic ontologies. One is used for identifying frauds in public procurements in Brazil and the other is used for identifying terrorist threats in the maritime domain. Both use cases demonstrate the advantages of PR-OWL 2.0 over its predecessor.
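The association described above, tying a random variable from a probabilistic theory to an existing OWL property and datatype, can be pictured with a toy mapping; the IRIs and names below are hypothetical illustrations, not PR-OWL 2.0 syntax:

```python
from dataclasses import dataclass

# Toy illustration of the PR-OWL 2.0 idea: a random variable in the
# probabilistic theory is anchored to an OWL property, and its values
# range over an OWL datatype. All identifiers here are hypothetical.

@dataclass(frozen=True)
class RandomVariable:
    name: str              # name of the random variable in the MEBN theory
    owl_property: str      # IRI of the OWL property it is associated with
    range_datatype: str    # OWL datatype the variable's values range over

rv = RandomVariable(
    name="hasAnnualIncome",
    owl_property="http://example.org/onto#hasAnnualIncome",
    range_datatype="xsd:decimal",
)
print(rv.owl_property)
```

The point of such an anchoring is that probabilistic and deterministic reasoners can agree on what an assertion about `hasAnnualIncome` refers to, which is what makes reuse of existing OWL ontologies possible.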
SWRL-F - A Fuzzy Logic Extension of the Semantic Web Rule Language (Rommel Carvalho)
Presentation given by Tomasz Wlodarczyk at the 6th Uncertainty Reasoning for the Semantic Web Workshop at the 9th International Semantic Web Conference in 2010.
Paper: SWRL-F - A Fuzzy Logic Extension of the Semantic Web Rule Language
Abstract: Enhancing Semantic Web technologies with the ability to express uncertainty and imprecision is a widely discussed topic. While SWRL can provide additional expressivity to OWL-based ontologies, it does not provide any way to handle uncertainty or imprecision. We introduce an extension of SWRL called SWRL-F that is based on the SWRL rule language and uses SWRL's strong semantic foundation as its formal underpinning. We extend it with a SWRL-F ontology to enable fuzzy reasoning in the rule base. The resulting language provides a small but powerful set of fuzzy operations that do not introduce inconsistencies into the host ontology.
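The fuzzy operations such a language builds on can be sketched with the standard Zadeh operators; this is a generic fuzzy-logic illustration with hypothetical membership degrees, not the SWRL-F ontology itself:

```python
# Standard (Zadeh) fuzzy operators over membership degrees in [0, 1].
# Generic fuzzy-logic illustration, not the SWRL-F ontology itself.

def fuzzy_and(a, b):
    return min(a, b)      # t-norm: conjunction

def fuzzy_or(a, b):
    return max(a, b)      # t-conorm: disjunction

def fuzzy_not(a):
    return 1.0 - a        # standard negation

# Hypothetical rule: a transaction is "suspicious" to the degree that it
# is both high-value and from a new supplier.
high_value = 0.8          # degree to which the amount is high
new_supplier = 0.6        # degree to which the supplier is new
suspicious = fuzzy_and(high_value, new_supplier)
print(suspicious)
```

Because min, max, and 1 - x always return values in [0, 1], rules built from these operators cannot push a membership degree outside its valid range, which is one way a fuzzy extension avoids introducing inconsistencies into the host ontology.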
UnBBayes-PRM - On Implementing Probabilistic Relational Models (Rommel Carvalho)
UnBBayes is a probabilistic network framework written in Java. It has both a GUI and an API, with inference, sampling, learning, and evaluation. It supports BN, ID, MSBN, OOBN, HBN, MEBN/PR-OWL, and PRM, as well as structure, parameter, and incremental learning.
This presentation talks about UnBBayes-PRM, a plugin for UnBBayes that has a simple implementation of Probabilistic Relational Models.
This presentation was given by Shou Matsumoto from the University of Brasília in Brazil via web conference to PhD students at George Mason University in the US, at the Friday seminar called Krypton (http://krypton.c4i.gmu.edu/), on October 29, 2010.
URSW 2009 - Probabilistic Ontology and Knowledge Fusion for Procurement Fraud… (Rommel Carvalho)
Presentation given by Rommel Carvalho at the 5th Uncertainty Reasoning for the Semantic Web Workshop at the 8th International Semantic Web Conference in 2009.
Paper: Probabilistic Ontology and Knowledge Fusion for Procurement Fraud Detection in Brazil
Abstract: To cope with society’s demand for transparency and corruption prevention, the Brazilian Federal General Comptroller (CGU) has carried out a number of actions, including: awareness campaigns aimed at the private sector; campaigns to educate the public; research initiatives; and regular inspections and audits of municipalities and states. Although CGU has collected information from hundreds of different sources - Revenue Agency, Federal Police, and others - the process of fusing all this data has not been efficient enough to meet the needs of CGU’s decision makers. Therefore, it is natural to change the focus from data fusion to knowledge fusion. As a consequence, traditional syntactic methods must be augmented with techniques that represent and reason with the semantics of databases. However, commonly used approaches fail to deal with uncertainty, a dominant characteristic in corruption prevention. This paper presents the use of Probabilistic OWL (PR-OWL) to design and test a model that performs information fusion to detect possible frauds in procurements involving Federal money. To design this model, a recently developed tool for creating PR-OWL ontologies was used with support from PR-OWL specialists and careful guidance from a fraud detection specialist from CGU.
PR-OWL 2.0 - Bridging the gap to OWL semantics
1. PR-OWL 2.0 - Bridging the gap to OWL semantics
Rommel Carvalho, Kathryn Laskey, and Paulo Costa
Center of Excellence in C4I, George Mason University, USA
Sixth International Workshop on Uncertainty Reasoning for the Semantic Web (URSW 2010)
11/07/2010
Sunday, December 19, 2010
11. Ontology
An ontology is an explicit, formal knowledge representation that expresses knowledge about a domain of application. This includes:
Types of entities that exist in the domain; Person, Procurement, Enterprise, ...
Properties of those entities; firstName, lastName, ...
Relationships among entities; motherOf, ownerOf, isFrontOf, ...
Processes and events that happen with those entities; analyzing if requirements are met, choosing the better proposal, ...
where the term entity refers to any concept (real or fictitious, concrete or abstract) that can be described and reasoned about within the domain of application. [3]
Introduction - PR-OWL 2.0 - Conclusion 4
15. Probabilistic Ontology
A probabilistic ontology is an explicit, formal knowledge representation that expresses knowledge about a domain of application. This includes:
Types of entities that exist in the domain; Person, Procurement, Enterprise, ...
Properties of those entities; firstName, lastName, ...
Relationships among entities; motherOf, ownerOf, isFrontOf, ...
Processes and events that happen with those entities; analyzing if requirements are met, choosing the better proposal, ...
Statistical regularities that characterize the domain; P(isFrontOf | valueOfProcurement > 1M, annualIncome < 10k) = 90%
Inconclusive, ambiguous, incomplete, unreliable, and dissonant knowledge related to entities of the domain;
Uncertainty about all the above forms of knowledge;
where the term entity refers to any concept (real or fictitious, concrete or abstract) that can be described and reasoned about within the domain of application. [3]
Introduction - PR-OWL 2.0 - Conclusion 5
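The statistical regularity on this slide reads as a conditional probability. A minimal, illustrative Python sketch of how such a regularity could be evaluated as a conditional probability table; the predicate and threshold names come from the slide, but every table entry other than the 90% figure is hypothetical:

```python
# Conditional probability table (CPT) for the slide's example:
# P(isFrontOf = true | valueOfProcurement > 1M, annualIncome < 10k) = 0.90
# All entries other than 0.90 are hypothetical, for illustration only.
cpt = {
    # (high_procurement_value, low_annual_income): P(isFrontOf = true)
    (True, True): 0.90,   # from the slide
    (True, False): 0.05,  # hypothetical
    (False, True): 0.10,  # hypothetical
    (False, False): 0.01, # hypothetical
}

def p_is_front_of(value_of_procurement: float, annual_income: float) -> float:
    """Probability that a person is a front for an enterprise, given evidence."""
    key = (value_of_procurement > 1_000_000, annual_income < 10_000)
    return cpt[key]

print(p_is_front_of(2_500_000, 8_000))  # 0.9
```

In a real PR-OWL model this distribution would live in an MEBN fragment rather than a hand-written table; the sketch only shows how evidence on the two conditioning properties selects a probability.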
64. Conclusion
Provided both the syntax and a more in-depth description of one of the major changes in PR-OWL 2.0:
A formal mapping between OWL concepts and PR-OWL random variables
Justified the importance of a formal mapping through an example
Presented a simple solution sufficient for binary relations
Presented a complex and robust solution for n-ary relations
Presented a schematic for how to do the mapping back and forth between PR-OWL random variables and OWL triples (both predicates and functions)
Introduction - PR-OWL 2.0 - Conclusion 16
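The binary case of the mapping between OWL concepts and PR-OWL random variables can be sketched roughly as follows: an OWL object property becomes a Boolean-valued random variable whose arguments carry the property's domain and range types. This is an illustrative Python sketch, not the paper's formal mapping, and the class and property names (Person, Enterprise, isOwnerOf) are hypothetical:

```python
# Sketch: an OWL object property (a binary relation) viewed as a
# Boolean-valued random variable over typed arguments.
# Names (Person, Enterprise, isOwnerOf) are illustrative, not normative.
from dataclasses import dataclass

@dataclass(frozen=True)
class RandomVariable:
    name: str               # OWL property the RV is defined from
    arg_types: tuple        # argument types, from rdfs:domain and rdfs:range
    possible_values: tuple  # Boolean for a plain object property

# OWL: isOwnerOf rdf:type owl:ObjectProperty ;
#      rdfs:domain :Person ; rdfs:range :Enterprise .
# Mapped to a Boolean random variable isOwnerOf(Person, Enterprise):
is_owner_of = RandomVariable(
    name="isOwnerOf",
    arg_types=("Person", "Enterprise"),
    possible_values=(True, False),
)

# An RV instance such as isOwnerOf(john, acme) then corresponds to the
# OWL triple (:john :isOwnerOf :acme) when it evaluates to True.
print(is_owner_of.name, is_owner_of.arg_types)
```

The design point the slide makes is that this correspondence must be formal and bidirectional, so that probabilistic assertions about the RV and crisp OWL triples stay consistent.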
68. Future work
Formally define the semantics of the proposed schematic
Propose an algorithm for performing the mapping from OWL concepts to PR-OWL RVs, and vice versa
In addition, PR-OWL 2 will address other issues described in [2]:
Replace the meta-entity definition in PR-OWL
Use of existing types in OWL, RDF(S), and XML as possible values for RVs (including data types)
Introduction - PR-OWL 2.0 - Conclusion 17
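OWL properties are strictly binary, which is why the n-ary relations mentioned in the conclusion need a more robust solution. The standard workaround is the W3C n-ary relations pattern: reify the relation instance as an individual and link each argument to it with a binary property. A rough Python sketch of that reification idea, with entirely hypothetical class and property names:

```python
# Sketch of the reification pattern for n-ary relations in OWL:
# a ternary relation participatesIn(person, procurement, role) becomes an
# individual of a "Participation" class linked by three binary properties.
# All class/property names here are illustrative, not from the paper.

def reify(relation: str, **args: str) -> list:
    """Turn an n-ary relation instance into binary triples via reification."""
    node = f"_:{relation}1"  # blank node standing for the relation instance
    triples = [(node, "rdf:type", f":{relation.capitalize()}")]
    for prop, value in args.items():
        triples.append((node, f":{prop}", f":{value}"))
    return triples

triples = reify("participation", person="john", procurement="proc42", role="bidder")
for t in triples:
    print(t)
```

Under this encoding, a PR-OWL random variable over n arguments can be related to OWL triples through the reified individual, which is the kind of back-and-forth mapping the schematic in the paper addresses.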