The document introduces the Distributed Ontology Language (DOL), which is part of the Ontology Integration and Interoperability (OntoIOp) standard. DOL aims to enable logical and modular heterogeneity across ontologies to improve semantic integration and interoperability. It will serve as a logic-agnostic meta-language for structuring ontologies, ontology modules, and formal and informal links between ontologies. DOL is intended to have well-defined semantics and serializations to XML, RDF, and text, to facilitate the reuse of existing ontologies and reasoning over heterogeneous ontological representations.
One of the most common and straightforward feature extraction techniques is the Mel Frequency Cepstral Coefficient (MFCC), which extracts a feature vector from the signal. It is used in dynamic feature extraction and provides a higher performance rate than earlier techniques such as LPC. However, one of its major drawbacks is a lack of robustness. Another feature extraction technique is Relative Spectral (RASTA) filtering. In effect, the RASTA filter band-passes each feature coefficient: in the log spectral domain, linear channel distortions appear as an additive constant. The high-pass portion of the equivalent band-pass filter alleviates the convolutional noise introduced by the channel, while the low-pass portion smooths frame-to-frame spectral changes. Compared to MFCC feature extraction alone, RASTA filtering reduces the impact of noise in the signal and provides higher robustness.
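The band-pass idea can be sketched in a few lines of numpy. The numerator taps below follow the classic RASTA transfer function, but the pole value and the function name are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def rasta_filter(traj):
    """Band-pass one log-spectral trajectory (one coefficient over time)
    with a RASTA-style IIR filter. The numerator taps sum to zero, so an
    additive constant in the log domain -- i.e. a fixed linear channel
    distortion -- is filtered out, which is the robustness gain over
    plain MFCCs described above."""
    b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])  # high-pass numerator
    pole = 0.94  # low-pass pole; values of 0.94-0.98 appear in the literature
    y = np.zeros(len(traj))
    for n in range(len(traj)):
        acc = sum(b[k] * traj[n - k] for k in range(len(b)) if n - k >= 0)
        y[n] = acc + (pole * y[n - 1] if n >= 1 else 0.0)
    return y
```

A quick sanity check: filtering `x` and `x + c` yields the same output once the transient dies away, confirming that constant log-domain offsets (channel distortions) are removed.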
A NOVEL APPROACH FOR NAMED ENTITY RECOGNITION ON HINDI LANGUAGE USING RESIDUA... – kevig
Named Entity Recognition (NER) is an important task in many Natural Language Processing (NLP) applications, as it improves their overall performance. In this paper, deep learning techniques are used to perform NER on Hindi text, since Hindi NER has received far less attention than English NER. This is a barrier for resource-scarce languages, for which many resources are not readily available. Researchers have applied various techniques, such as rule-based, machine-learning-based, and hybrid approaches, to this problem. Deep-learning-based algorithms are now being developed at large scale as an innovative approach to advanced NER models that yield the best results. In this paper, we devise a novel architecture based on a residual network over a Bidirectional Long Short-Term Memory (BiLSTM) with fastText word embedding layers. For this purpose, we use pre-trained word embeddings to represent the words in the corpus, with the NER tags of the words defined by the annotated corpora used. Development of an NER system for Indian languages is a comparatively difficult task. We conduct various experiments comparing NER results with normal embedding and fastText embedding layers, and analyse the performance of the word embeddings with different batch sizes for training the deep learning models. With this approach, we present state-of-the-art results in terms of F1 score.
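The residual-over-BiLSTM idea can be illustrated with a small numpy sketch. A plain tanh recurrence stands in for the LSTM cell to keep it short, and all weight names and shapes are hypothetical, not the paper's actual architecture:

```python
import numpy as np

def rnn_pass(x, W, U, reverse=False):
    """One direction of a toy recurrent layer (a tanh RNN standing in for
    the LSTM cell). x has shape (T, d): T time steps, d features."""
    h = np.zeros(x.shape[1])
    out = np.zeros_like(x)
    steps = range(x.shape[0] - 1, -1, -1) if reverse else range(x.shape[0])
    for t in steps:
        h = np.tanh(x[t] @ W + h @ U)
        out[t] = h
    return out

def residual_bi_block(x, Wf, Uf, Wb, Ub, Wp):
    """Bidirectional recurrence with a residual shortcut: the block outputs
    projection(concat(forward, backward)) + x, so gradients can skip the
    recurrent layers -- the point of stacking residual blocks over a BiLSTM."""
    fwd = rnn_pass(x, Wf, Uf)                  # left-to-right context
    bwd = rnn_pass(x, Wb, Ub, reverse=True)    # right-to-left context
    merged = np.concatenate([fwd, bwd], axis=-1) @ Wp  # project 2d -> d
    return merged + x                          # identity shortcut
```

Because the output dimension matches the input, several such blocks can be stacked before a per-token tag classifier.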
Imran Sarwar Bajwa (2010), "Markov Logics Based Automated Business Requirements Analysis", International Journal of Computer and Electrical Engineering (IJCEE) 2(3), pp. 481–485, June 2010.
"Object Oriented Programming in Python" presentation for the "Dynamic Programming Languages" subject by Juan Manuel Gimeno Illa at University of Lleida
A plethora of programming languages have been and continue to be developed to keep pace with hardware advancements and the ever more demanding requirements of software development. <br> As these increasingly sophisticated languages need to be well understood by both programmers and implementors, precise specifications are increasingly required. Moreover, the safety of programs written in these languages, and their adequacy with respect to requirements, need to be tested, analyzed, and, if possible, proved. <br> This dissertation proposes a rigorous, rewriting-based approach to defining programming languages, which makes it easy to design and test language extensions and to specify and analyze the safety and adequacy of program executions.
To this aim, this dissertation describes the K Framework, an executable semantic framework inspired by rewriting logic but specialized and optimized for programming languages.
The K Framework consists of three components: (1) a language definitional technique; (2) a specialized notation; and (3) a resource-sharing concurrent rewriting semantics. The language definitional technique is a rewriting technique built upon the lessons learned from capturing and studying existing operational semantics frameworks within rewriting logic, and upon attempts to combine their strengths while avoiding their limitations. The specialized notation makes the technical details of the technique transparent to the language designer, and enhances modularity, by allowing the designer to specify the minimal context needed for a semantic rule. Finally, the resource-sharing concurrent semantics relies on the particular form of the semantic rules to enhance concurrency, by allowing overlapping rule instances (e.g., two threads writing in different locations in the store, which overlap on the store entity) to apply concurrently as long as they only overlap on the parts they do not change.
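The overlap condition for concurrent rule application can be sketched abstractly. The dict-based store and the read/write-set encoding below are illustrative simplifications, not K's actual machinery:

```python
def writes(rule):
    """Keys a rule instance changes (its update targets)."""
    return set(rule["updates"])

def can_fire_concurrently(a, b):
    """Two rule instances may apply concurrently iff they overlap only on
    parts neither changes: disjoint write-sets, and neither writes what
    the other reads -- e.g. two threads writing different store locations
    while both matching on the shared store entity."""
    return (writes(a).isdisjoint(writes(b))
            and writes(a).isdisjoint(b["reads"])
            and writes(b).isdisjoint(a["reads"]))

def apply_all(store, rules):
    """Apply a set of pairwise-compatible rule instances to a shared store
    (a dict playing the role of K's store cell); order does not matter."""
    for r in rules:
        for key, f in r["updates"].items():
            store[key] = f(store[key])
    return store
```

For instance, a thread incrementing location `x` and a thread doubling location `y` are compatible, while two writers of `x` are not.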
The main contributions of the dissertation are:
(1) a uniform recasting of the major existing operational semantics techniques within rewriting logic;
(2) an overview description of the K Framework and how it can be used to define, extend and analyze programming languages;
(3) a semantics for K concurrent rewriting obtained through an embedding in graph rewriting; and
(4) a description of the K-Maude tool, a tool for defining programming languages using the K technique on top of the Maude rewriting language.
A MULTI-LAYER HYBRID TEXT STEGANOGRAPHY FOR SECRET COMMUNICATION USING WORD T... – IJNSA Journal
This paper introduces a multi-layer hybrid text steganography approach that utilizes word tagging and recoloring. Existing approaches are designed to achieve either imperceptibility, or a high hiding capacity, or robustness. The proposed approach does not use the ordinary sequential insertion process; it overcomes the issues of current approaches by pursuing imperceptibility, high hiding capacity, and robustness together, through a hybrid of a linguistic technique and a format-based technique. The linguistic technique divides the cover text into embedding layers, where each layer consists of a sequence of words sharing a single part of speech detected by a POS tagger, while the format-based technique recolors the letters of the cover text with near-identical RGB color codes to embed 12 bits of the secret message in each letter, which yields a high hiding capacity and keeps the embedding blind. Moreover, robustness is accomplished through the multi-layer embedding process, and the generated stego key significantly strengthens the security of the embedded messages and their size. An experimental comparison shows that the proposed approach outperforms currently developed approaches in providing an ideal balance between the imperceptibility, hiding capacity, and robustness criteria.
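The 12-bits-per-letter recoloring can be sketched as follows. Replacing the low 4 bits of each RGB channel is one plausible "near color" coding, offered as an illustration rather than the paper's exact scheme:

```python
def embed_letter(rgb, payload):
    """Hide a 12-bit payload in one letter's color by replacing the four
    least-significant bits of each of R, G and B. Each channel shifts by
    at most 15, so the stego letter's color stays close to the original."""
    r, g, b = rgb
    return ((r & 0xF0) | ((payload >> 8) & 0xF),
            (g & 0xF0) | ((payload >> 4) & 0xF),
            (b & 0xF0) | (payload & 0xF))

def extract_letter(rgb):
    """Recover the 12 hidden bits from a recolored letter."""
    r, g, b = rgb
    return ((r & 0xF) << 8) | ((g & 0xF) << 4) | (b & 0xF)
```

A message is then split into 12-bit chunks, one per cover-text letter, which is what gives the approach its high capacity.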
Comparison and Analysis of LDM and LMS for an Application of a Speech – CSCJournals
Most automatic speech recognition (ASR) systems are based on Gaussian mixture models. The output of these models depends on subphone states. We often measure and transform the speech signal into another form to enhance our ability to communicate. Speech recognition is the conversion of an acoustic waveform into its written equivalent message information. The nature of the speech recognition problem depends heavily on the constraints placed on the speaker, the speaking situation, and the message context. Various speech recognition systems are available; the best models are those that detect the hidden conditions of speech. LMS is a simple algorithm used to reconstruct speech, and the linear dynamic model (LDM) is used to recognize speech in noisy environments. This paper analyses and compares the LDM and a simple LMS algorithm for speech recognition purposes.
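For concreteness, here is a minimal numpy sketch of the LMS update; the tap count, step size, and test filter are illustrative choices, not tied to the paper's experiments:

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Least-mean-squares adaptive filter: predict the desired signal d
    from the last n_taps input samples and nudge the weights along the
    instantaneous error gradient -- the 'simple algorithm' compared
    against the linear dynamic model above."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]  # newest sample first
        y[n] = w @ window                        # current prediction
        e = d[n] - y[n]                          # prediction error
        w += mu * e * window                     # stochastic-gradient step
    return w, y
```

Run against a signal filtered by a known 2-tap FIR system, the weights converge to that system's coefficients, which is how LMS is typically validated.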
OOP and Its Calculated Measures in Programming Interactivity – iosrjce
This study examines object-oriented programming (OOP) and its calculated measures in programming interactivity in Nigeria. It focuses on the existing programming languages used by programmers and examines the need for integrating programming interactivity with OOP. A survey was conducted to measure interactivity amongst professionals using parameters such as flexibility, interactivity, speed, interoperability, scalability, dynamism, and solving real-life problems. Data were gathered using a questionnaire, and the analysis was carried out using frequency, percentage ratio, and mean to arrive at a more proactive stand. The results revealed that some of the parameters used strongly support programming interactivity with OOP.
Knowledge Multimedia Processes in Technology Enhanced Learning – Ralf Klamma
Presentation for the 1st International Workshop on Multimedia Technologies and Distant Learning at ACM Multimedia 2009
Ralf Klamma, Marc Spaniol, Matthias Jarke
Beijing, China, October 23, 2009
Linked Open (Geo)Data and the Distributed Ontology Language – a perfect match – Christoph Lange
The Distributed Ontology Language is a meta-language for integrating ontologies written in different languages. Our notion of "distributed" comprises logical heterogeneity within ontologies, modularity and reuse, and links across ontologies in different places of the Web. Not only can ontologies be distributed across the Web, but DOL's supply of supported ontology languages can also be extended in a decentralized way. For this functionality, DOL builds on the Linked Open Data (LOD) principles. But DOL also contributes to LOD use cases. Many current LOD applications are limited by the weak expressivity of the RDF and RDFS languages commonly used to express data and vocabularies, while completely switching to a more expressive language would impair scalability to big datasets. DOL addresses both the scalability and the expressivity requirements by allowing each aspect of a dataset to be represented in the most suitable language while keeping these different representations connected. This is particularly useful in geographic information systems, where big datasets (e.g. LinkedGeoData, the LOD version of OpenStreetMap) need to be integrated with formalisations of complex spatial notions (e.g. in the first-order language Common Logic).
Interlinking Data and Knowledge in Enterprises, Research and Society with Lin... – Christoph Lange
The Linked Data paradigm has emerged as a powerful enabler for data and knowledge interlinking and exchange using standardised Web technologies.
In this article, we discuss our vision of how the Linked Data paradigm can be employed to evolve the intranets of large organisations -- be they enterprises, research organisations or governmental and public administrations -- into networks of internal data and knowledge.
For large enterprises in particular, data integration is still a key challenge. The Linked Data paradigm seems a promising approach to integrating enterprise data. Like the Web of Data, which now complements the original document-centred Web, data intranets may help to enhance, and make more flexible, the intranets and service-oriented architectures that exist in large organisations. Furthermore, using Linked Data gives enterprises access to 50+ billion facts from the growing Linked Open Data (LOD) cloud. As a result, a data intranet can help to bridge the gap between structured data management (in ERP, CRM or SCM systems) and semi-structured or unstructured information in documents, wikis or web portals, and make all of these sources searchable in a coherent way.
Keynote at Baltic DB&IS 2014, 9 June 2014, Tallinn, Estonia
Health Information Exchange in the U.S. Today – Greenway Health
This presentation covers state HIE challenges, how Meaningful Use and HIEs work hand-in-hand, how HIEs are becoming more sustainable, and more about HIE initiatives.
International HL7 Interoperability Conference 2015 Presentation: DECOR Driven Framework for Rapid Development of HL7 CDA Document Editor Components of EHR Systems
7 Strategies to Improve HEDIS Scores and Star Ratings – Healthx
In recent years, achieving high scores on HEDIS® measures and Medicare Star Ratings has taken on greater importance for health plans. What was once nice-to-have for marketing purposes has become a must-have for operating in certain lines of business. Here’s why: NCQA Health Plan Accreditation, financial bonuses, and even a plan’s ability to enroll members can be affected by their ratings. If HEDIS Scores and Star Ratings are so important, why don’t more plans work to improve them?
A presentation on the history and background of standards including: Standards & Standardization – What Are Standards?; Before Standardization; The Birth of Standardization; History of BSI; European Committee for Standardization (CEN); International Organization for Standardization (ISO); Types of Standard; The Standardization Process; The Economic Impact of Standardization; The Impact of Using Standards; Testing and Certification; CE and Kitemark®; Standards & Standardization; How to Get Involved; Standards & Standardization -Making a New Work Proposal; Standards Relevant to Digital Inclusion; Standards & Standardization - Further Reading.
Health Care Data Sets and their purpose
UHDDS, UACDS, MDS, OASIS, DEEDS and EMDS.
Explain the standardization data collection efforts.
Explain the five type of standards that need to be in place to implement the Nationwide Health Information Network (NHIN).
Standard Development Organizations
Evolving and Emerging Health Information Standards
WSO2 Guest Webinar - ESB meets IoT, a Primer on WSO2 Enterprise Service Bus (... – Yenlo
This webinar looks into the Enterprise Service Bus (ESB), which is the core of a Service Oriented Architecture (SOA). After having discussed the need for an ESB and an SOA, we'll explain what WSO2 ESB has to offer and how it deals with messages.
After a brief introduction, we'll show you how WSO2 ESB can be used for Internet of Things applications, by using the example of a smart doorlock and a smart thermostat communicating through WSO2 ESB to, for instance, lower the thermostat temperature when the door is locked.
Ishan (WSO2) and Rob (Yenlo) will discuss the usage of WSO2 ESB for Internet of Things applications. Topics will be:
What WSO2 components do you need for the Internet of Things?
What deployment do you need for a large sensor network?
How do you analyze and display data?
Examples of WSO2-enabled Internet of Things solutions (e.g. Trimble’s Connected Plants)
See the recording of this WSO2 ESB webinar here: http://www.yenlo.com/en/web-esb-meets-iot
Project number: 224348
Project acronym: AEGIS
Project title: Open Accessibility Everywhere: Groundwork, Infrastructure, Standards
Starting date: 1 September 2008
Duration: 48 Months
AEGIS is an Integrated Project (IP) within the ICT programme of FP7
Prepare for a PHP job interview by quickly going through common questions and answers. This is an exhaustive list of questions along with answers, for more visit - https://www.edupro.xyz/php-oops-interview-questions/
Semantic Rules Representation in Controlled Natural Language in FluentEditor – Cognitum
Abstract. The purpose of this paper is to present a way of representing semantic rules (SWRL) in a controlled natural language (English) in order to help humans interacting with a machine understand the rules. The rule representation is implemented in FluentEditor, an ontology editor with controlled natural language (CNL). The representation can be used in many domains where people interact with machines and use specialized interfaces to define knowledge in a system (a semantic knowledge base), e.g. representing medical knowledge and guidelines, procedures in crisis management, or the management of any coordination processes. Such knowledge bases are able to support decision making in any discipline, provided the knowledge is stored in a proper semantic way.
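As a toy illustration of the idea (not FluentEditor's actual CNL grammar), a rule's atoms can be rendered as a controlled-English "If ... then ..." sentence; the atom encoding and hyphenated predicate style are assumptions for this sketch:

```python
def atom_to_cnl(atom):
    """Render one (subject, predicate, object) atom as controlled English.
    Underscores in predicate names become hyphens, mimicking CNL style."""
    subject, predicate, obj = atom
    return f"{subject} {predicate.replace('_', '-')} {obj}"

def rule_to_cnl(body_atoms, head_atom):
    """Render a SWRL-style rule (body implies head) as one CNL sentence
    that a domain expert can read without knowing the rule syntax."""
    body = " and ".join(atom_to_cnl(a) for a in body_atoms)
    return f"If {body} then {atom_to_cnl(head_atom)}."
```

The same template approach extends to variables and built-ins, at the cost of a richer grammar than this sketch shows.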
A Methodological Framework for Ontology and Multilingual Termontological Data... – Christophe Debruyne
A Methodological Framework for Ontology and Multilingual Termontological Database Co-evolution
C. Debruyne, C. Vasquez, K. Kerremans, and A.D. Burgos
LNCS 7567, p. 220 ff.
Ontologies and Multilingual Termontology Bases (MTB) are two knowledge artifacts with different characteristics and different purposes. Ontologies are used to formally capture a shared view of the world to solve particular interoperability and reasoning tasks. MTBs are general, contain fewer types of relations, and their purpose is to relate term labels within and across different languages to categories. For regions in which the multilingual aspect is vital, not only does one need an ontology for interoperability, but the concepts in that ontology need to be comprehensible for everyone whose native tongue is one of the principal languages of that region. Multilinguality also provides a powerful mechanism to perform ontology mapping, content annotation, multilingual querying, etc. We intend to meet these challenges by linking methods for constructing ontologies and MTBs, creating a virtuous cycle. In this paper, we present our method and tool for ontology and MTB co-evolution.
A Comparative Study of Ontology Building Tools for Semantic Web Applications – IJwest
Ontologies have recently gained popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge. The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) ease of updating and maintenance, d) market status and penetration. The results of the review of ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology building/management tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most users use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper also concerns the detection of commonalities and differences between the examined ontologies, both within the same domain (application area) and among different domains.
A Comparative Study of Ontology Building Tools for Semantic Web Applications – dannyijwest
A Comparative Study of Ontology Building Tools in Semantic Web Applications – dannyijwest
MLGrafViz: multilingual ontology visualization plug-in for Protégé – CSITiaesprime
Natural language processing (NLP) is rapidly spreading across all domains of knowledge acquisition to serve users of different languages. Knowledge-based NLP systems are required to provide better results. Knowledge-based systems can be implemented using ontologies, where an ontology is a collection of terms and concepts arranged taxonomically. Concepts that are visualized graphically are more understandable than those in text form. In this research paper, a new multilingual ontology visualization plug-in, MLGrafViz, is developed to visualize ontologies in different natural languages. The plug-in is developed for the Protégé ontology editor and allows the user to translate and visualize the core ontology in 135 languages.
These are the slides from an introductory lesson to the KotlinNLP Natural Language Processing library, written entirely in Kotlin and released as open source software, available at https://github.com/KotlinNLP
Development, distribution and use of open source software comprise a market of data (source code, bug reports, documentation, number of downloads, etc.) from projects, developers and users. This large amount of data makes it difficult for the people involved to make sense of implicit links between software projects, e.g., dependencies, patterns, licenses. This context raises the question of what techniques and mechanisms can be used to help users and developers link related pieces of information across software projects. In this paper, we propose a framework for a marketplace enhanced with linked open data (LOD) technology for linking software artifacts within as well as across software projects. The marketplace provides the infrastructure for collecting and aggregating software engineering data as well as for developing services for mining, statistics, analytics and visualization of software data. By cross-linking software artifacts and projects, the marketplace enables developers and users to understand the individual value of components and their relationship to bigger software systems. Improved understanding creates new business opportunities for software companies: users will be better able to analyze and compare projects, developers can increase the visibility of their products, and hosts may offer plug-ins and services over the data to paying customers.
Similar to Ontology Integration and Interoperability (OntoIOp) – Part 1: The Distributed Ontology Language (DOL) (20)
Faire Datenökonomie für Wirtschaft, Wissenschaft und Gesellschaft: Was brauch...Christoph Lange
In Wirtschaft und Wissenschaft entstehen zunehmend Infrastrukturen für Datenaustausch. Der Wirtschaft ist Vertrauen unter Geschäftspartnern wichtig und Souveränität darüber, was Andere mit meinen Daten machen – die Wissenschaft betont freie Zugänglichkeit und Nachnutzbarkeit. FAIR Data Spaces verbinden beides auf Grundlage gemeinsamer Prinzipien.
Was muss getan werden, damit Datenaustausch nicht mehr bedeutet, E-Mail-Anhänge zu verschicken oder Geheimnisse zentralen Plattformen feindlicher Mächte anzuvertrauen? Wirtschaft, Wissenschaft und öffentliche Verwaltung suchen zunehmend nach Lösungen, um den Datenaustausch sicher und effizient zu gestalten und damit neues Innovationspotenzial zu heben. Was gibt es schon, was ist geplant, und wie können vorhandene Initiativen zusammenwachsen, um Daten über die Grenzen dieser Welten hinaus gemeinsam zu nutzen?
Initiativen der Wirtschaft wie Gaia-X und International Data Spaces priorisieren den Aufbau von Vertrauen unter Geschäftspartner:innen ohne Papier-Verträge sowie die Souveränität darüber, was Andere mit den eigenen wertvollen Daten machen. In der Wissenschaft, zum Beispiel bei der Nationalen Forschungsdateninfrastruktur NFDI, geht es um freie Zugänglichkeit und Nachnutzbarkeit im Einklang mit ethischen Prinzipien. Der öffentlichen Hand ist neben dem freien Zugang etwa zu Open-Data-Portalen die digitale Daseinsvorsorge wichtig. Große Herausforderungen unserer Zeit erfordern Datenaustausch nicht nur innerhalb dieser Welten, sondern über ihre Grenzen hinaus:
zum Beispiel zwischen Forschungsinstituten und kleinen Technologie-Unternehmen, die nicht alle Daten selbst sammeln können,
oder zwischen großen Unternehmen mit reichen Datenschätzen und wirtschaftlichen Interessen und einer Nutzung dieser Daten für das Gemeinwohl.
Das Projekt FAIR Data Spaces schafft Bausteine für übergreifende Datenräume als Keimzellen einer fairen Datenökonomie nach gemeinsamen Prinzipien. Wir möchten diskutieren, wie weit die aus dem Forschungsdatenmanagement stammenden FAIR-Data-Prinzipien tragen, wonach Daten findable (auffindbar), accessible (zugänglich), interoperabel und reusable (nachnutzbar) sein sollen. Das Projekt verfolgt den Plan, vorhandene Initiativen organisatorisch, rechtlich, technisch und praktisch zu einer gemeinsamen Community zusammenzuführen, und lebt dabei von einer breiten Mitwirkung. Werdet mit dem Fraunhofer IUK-Verbund Teil dieser Community und bleibt dabei innovativ und kritisch!
Linking Big Data to Rich Process DescriptionsChristoph Lange
Linked (Open) Data is one key to coping with Big Data: it enables decentralised, collaborative management of big datasets, low-overhead information retrieval, and scalable reasoning. Big Data are created or consumed by technical processes or business processes. Their formal description, e.g. for software verification or compliance checking, requires logics whose complexity far exceeds that of the data. Restricting LOD to the RDF logic does not allow for integrating rich process descriptions with the data that these processes create, and therefore does not enable knowledge management, information retrieval and reasoning to take full advantage of rich background knowledge. In this talk I demonstrate different frontiers at which I have worked towards achieving an integration of process descriptions and data.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Ontology Integration and Interoperability (OntoIOp) – Part 1: The Distributed Ontology Language (DOL)
1. Motivation | Standard | Organization | Roadmap
Ontology Integration and Interoperability (OntoIOp) – Part 1: The Distributed Ontology Language (DOL)
IAOA/OOR/Ontolog “Ontologies and Standards” mini-series
Till Mossakowski, Oliver Kutz, Christoph Lange
Universität Bremen, Germany
2011-10-20
Mossakowski/Kutz/Lange, OntoIOp Part 1: Distributed Ontology Language (DOL), 2011-10-20
2. Interoperable Assistive Technology
Assistive technology increasingly relies on communication
• among users,
• between users and their devices, and
• among these devices.
Making such ICT accessible and inclusive is costly or even impossible.
We aim at more interoperable
• devices,
• services accessing these devices, and
• content delivered by these services
… at the levels of
• data and metadata,
• data models and data modelling methods,
• metamodels, as well as a meta ontology language.
3. The Big Picture of Interoperability
[Diagram: three columns – Knowledge Infrastructure (knowledge), Service-Oriented Architecture (software agents), Smart Environment (hardware) – connected across columns by interoperability mappings, at three levels:
• Data: Concepts/Data/Individuals, which a Service processes; the Service accesses a Target (Device).
• Models: an Ontology refers to a Service Description and a Target Description; data is represented in terms of the ontology, the service satisfies its description, and the target conforms to its description.
• Metamodels: Ontology Language/Logic, Service Description Language, Target Description Language; each model is written in the corresponding language.]
4. Overview of DOL (Distributed Ontology Language)
In practical applications, one ontology language and one logic do not suffice to achieve semantic integration and interoperability.
Part 1 of the OntoIOp draft standard provides a meta-language (DOL) for:
• logically heterogeneous ontologies
• modular ontologies
• formal and informal links between ontologies/modules
• annotation and documentation of ontologies
DOL will have a formal semantics and concrete XML, RDF and text serializations.
We leave services and devices to future parts of the standard.
5. Requirements I
DOL should be generally applicable, open, and extensible:
• “generally applicable” = not restricted to one domain, nor to foundational ontologies
• “open” → language-/logic-agnostic
• “extensible” → conformance criteria
DOL shall be a logic-agnostic metalanguage:
• structural elements: ontologies, modules, axioms
• but not the content of axioms, as that is logic-specific – we’ll borrow that from existing languages
• → links between ontologies
6. Requirements II
DOL should have user- and machine-readable serializations:
• for users: text
• for machines: XML and RDF
• literally include constructs from existing ontology languages as far as technically possible ⇒ ability to reuse existing ontologies
DOL should have a well-defined formal, logic-based semantics:
• criteria for logics to conform with DOL
• translations between these logics (next slide)
7. The Onto-Logical Translation Graph
[Diagram: a graph of logics connected by translations. Nodes include OBO 1.4, OBOOWL, bRDF, RDF, RDFS, the OWL profiles EL, QL and RL, PL, OWL, DDLOWL, RDFSOWL, ECoOWL, FOL=, ECoFOL, Rel-S, F-logic, CL (Common Logic), CASL, FOLms= and HOL.
Colour legend: grey – no fixed expressivity; green – decidable ontology languages; yellow – semi-decidable; orange – some second-order constructs; red – full second-order logic.
Edge types: subinstitutions, theoroidal subinstitutions, simultaneously exact and model-expansive comorphisms, and model-expansive comorphisms.]
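The practical use of such a translation graph is path lookup: given a source logic and a target logic, a tool searches for a chain of translations connecting them. A minimal sketch in Python – the edge set below is an illustrative subset chosen for this example, not the normative graph of the draft standard:

```python
from collections import deque

# Illustrative subset of translation edges (source logic -> target logics).
# The real graph in the draft standard has more nodes, more edges, and
# distinguishes several kinds of translations; this only sketches the lookup.
TRANSLATIONS = {
    "EL": ["OWL"],
    "QL": ["OWL"],
    "RL": ["OWL"],
    "RDFS": ["OWL"],
    "OWL": ["FOL="],
    "FOL=": ["CL"],
    "CL": ["HOL"],
}

def translation_path(source, target):
    """Breadth-first search for a shortest chain of logic translations."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in TRANSLATIONS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no translation chain exists

print(translation_path("EL", "CL"))   # ['EL', 'OWL', 'FOL=', 'CL']
print(translation_path("HOL", "EL"))  # None (translations are directed)
```

This is the lookup that, per the requirements later in the deck, lets a translation path stay implicit in a DOL document as long as tooling can recompute it.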
8. Requirements III
DOL should allow for expressing heterogeneous ontologies:
• e.g. an OWL ontology with some FOL axioms
• … for use with an OWL reasoner, a FOL theorem prover, and a FOL model finder
DOL should allow for expressing links between ontologies:
• formal/structural links
• informal (statistical/heuristic) alignments
[Diagram: a DOL document imports the ASK-IT ontologies (Transportation, Tourism, Personal Support), written in the OWL ontology language and serialized as OWL-XML, and the DOLCE foundational ontology, written in Common Logic and serialized as CLIF; an ontology language translation and an interpretation link the OWL ontology to the Common Logic ontology.]
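The separation described here – DOL fixes the structure (which ontologies exist, in which logics and serializations, and how they are linked), while the axioms of each part remain opaque, logic-specific content – can be sketched as a small data model. A minimal sketch in Python; all class and field names are illustrative and not taken from the draft standard:

```python
from dataclasses import dataclass, field

@dataclass
class BasicOntology:
    name: str
    logic: str          # e.g. "OWL" or "CommonLogic"
    serialization: str  # e.g. "OWL-XML" or "CLIF"
    axioms: list        # opaque to DOL; the syntax belongs to the logic

@dataclass
class Link:
    source: str
    target: str
    kind: str           # e.g. "import", "interpretation", "alignment"

@dataclass
class DistributedOntology:
    parts: list = field(default_factory=list)
    links: list = field(default_factory=list)

    def logics(self):
        # A heterogeneous ontology is one whose parts span several logics.
        return {p.logic for p in self.parts}

# The example from the slide: ASK-IT domain ontologies in OWL, interpreted
# in the DOLCE foundational ontology written in Common Logic.
askit = BasicOntology("ASK-IT", "OWL", "OWL-XML", ["<OWL axioms>"])
dolce = BasicOntology("DOLCE", "CommonLogic", "CLIF", ["<CLIF axioms>"])
combined = DistributedOntology(
    parts=[askit, dolce],
    links=[Link("ASK-IT", "DOLCE", "interpretation")],
)
print(sorted(combined.logics()))  # ['CommonLogic', 'OWL']
```

The design point is that a DOL processor can manipulate parts and links without being able to parse the axiom lists, which are handed to logic-specific tools (an OWL reasoner, a FOL prover) as-is.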
9. Requirements IV
DOL should allow for writing down ontologies and ontology links as implicitly as possible and as explicitly as needed.
Examples of explicit information:
• an alignment computed by a matcher
• a translation path determined by lookup from the ontology graph
If you have access to these tools, you don’t need the information ⇒ keep it implicit.
If you pass the ontology on to a co-developer, they may need it ⇒ make it explicit.
DOL should allow for rich annotation and documentation of ontologies:
• RDF(a)-compatible annotations
• fine-grained intermixture of formalization and documentation (literate programming)
• we shall recommend a list of RDF vocabularies (OMV etc.)
10. Conformance Criteria I
DOL should work with any existing or future ontology language (if the latter conforms!).
We shall establish the conformance of:
• OWL, Common Logic, RDFS (normative)
• F-logic, UML class diagrams, OBO (informative)
Conformance of a logic (directly or by translation):
• semantic conformance (institutions)
• > entailment conformance (entailment system; useful to include non-monotonic logics)
Conformance of a serialization:
• XML conformance (annotation/markup up to literate programming)
• > RDF conformance (annotation but no markup)
• > text conformance (can still use special comments)
• > standoff markup conformance (can still use XPointer)
11. Conformance Criteria II
Conformance of a document (“Is this document a DOL ontology?”):
• e.g. auto-identification of the ontology language used for an axiom is possible – if there are no name clashes with other ontology languages used in the same document
Conformance of an application:
• a DOL-conforming application produces DOL-conforming documents!
12. Organization and People
OntoIOp is WD (Working Draft) 17347, developed within ISO TC 37/SC 3/WG 3 (→ Sue Ellen Wright’s presentation).
Project team: Till Mossakowski, Oliver Kutz, Christoph Lange (Bremen, Germany)
Secretary: Gottfried Herzog, DIN, Germany
So far we have registered experts from: Austria, Belgium, Canada, China, Denmark, Spain, Finland, Greece, Italy, Korea, Mexico, UK, US, South Africa (bold: have been active so far)
13. Infrastructure and Resources
In the current phase we mainly use an unofficial community infrastructure; in later phases we will increasingly use Livelink.
• Mailing list: ontoiop-wg@interop.cim3.net – archive at http://interop.cim3.net/forum/ontoiop-wg/
• Community file repository (WebDAV): http://interop.cim3.net/file/work/OntoIOp/ – working drafts (not including the source), meeting minutes, voting results, review comments, relevant literature and other standards
• Homepage: http://ontolog.cim3.net/cgi-bin/wiki.pl?OntoIOp
14. Roadmap
• Nov 2011: 2nd WD (Working Draft) – 6 weeks review period (informal community feedback highly appreciated)
• 23 Feb 2012: OntoIOp meeting in Berlin
• Apr 2012: 3rd WD – 6 weeks review period (informal community feedback highly appreciated)
• Jun 2012: ISO/TC 37 meeting in Madrid
• Aug 2012: CD (Committee Draft) – 3 months review and voting (more formal)
• Aug 2013: DIS (Draft International Standard)
• Feb 2015: FDIS (Final Draft International Standard)
• Aug 2015: IS (International Standard)