The presentation provides an overview of what an ontology is and how it can be used for representing information and retrieving data, with a particular focus on the linguistic resources available for supporting this kind of task. It then surveys semantic-based retrieval approaches, highlighting the pros and cons of semantic approaches with respect to classic ones. Use cases are presented and discussed.
morning session talk at the second Keystone Training School "Keyword search in Big Linked Data" held in Santiago de Compostela.
https://eventos.citius.usc.es/keystone.school/
Translating Ontologies in Real-World Settings - Mauro Dragoni
To enable knowledge access across languages, ontologies, which are often represented only in English, need to be translated into different languages. The main challenge in translating ontologies is to find the right term with respect to the domain modeled by the ontology itself. Machine translation services may help in this task; however, a crucial requirement is to have translations validated by experts before the ontologies are deployed. Real-world applications must implement a support system addressing this task to relieve experts of the work of validating all translations. In this paper, we present ESSOT, an Expert Supporting System for Ontology Translation. The peculiarity of this system is that it exploits semantic information from the concept's context to improve the quality of label translations. The system has been tested both within the Organic.Lingua project, by translating the modeled ontology into three languages, and on other multilingual ontologies in order to evaluate its effectiveness in other contexts. The results have been compared with the translations provided by the Microsoft Translator API, and the improvements demonstrate the viability of the proposed approach.
Exploration, visualization and querying of linked open data sources - Laura Po
afternoon hands-on session talk at the second Keystone Training School "Keyword search in Big Linked Data" held in Santiago de Compostela.
https://eventos.citius.usc.es/keystone.school/
The Web of Data: do we actually understand what we built? - Frank van Harmelen
Despite its obvious success (largest knowledge base ever built, used in practice by companies and governments alike), we actually understand very little of the structure of the Web of Data. Its formal meaning is specified in logic, but with its scale, context dependency and dynamics, the Web of Data has outgrown its traditional model-theoretic semantics.
Is the meaning of a logical statement (an edge in the graph) dependent on the cluster ("context") in which it appears? Does a more densely connected concept (node) contain more information? Is the path length between two nodes related to their semantic distance?
Properties such as clustering, connectivity and path length are not described, much less explained by model-theoretic semantics. Do such properties contribute to the meaning of a knowledge graph?
To properly understand the structure and meaning of knowledge graphs, we should no longer treat knowledge graphs as (only) a set of logical statements, but treat them properly as a graph. But how to do this is far from clear.
In this talk, I report on some of our early results on some of these questions, but I ask many more questions for which we don't have answers yet.
morning session talk at the second Keystone Training School "Keyword search in Big Linked Data" held in Santiago de Compostela.
https://eventos.citius.usc.es/keystone.school/
What Are Links in Linked Open Data? A Characterization and Evaluation of Link... - Armin Haller
Linked Open Data promises to provide guiding principles to publish interlinked knowledge graphs on the Web in the form of findable, accessible, interoperable, and reusable datasets. In this talk I argue that while as such, Linked Data may be viewed as a basis for instantiating the FAIR principles, there are still a number of open issues that cause significant data quality issues even when knowledge graphs are published as Linked Data. In this talk I will first define the boundaries of what constitutes a single coherent knowledge graph within Linked Data, i.e., present a principled notion of what a dataset is and what links within and between datasets are. I will also define different link types for data in Linked datasets and present the results of our empirical analysis of linkage among the datasets of the Linked Open Data cloud. Recent results from our analysis of Wikidata, which has not been part of the Linked Open Data Cloud, will also be presented.
This 2-hour lecture was held at Amsterdam University of Applied Sciences (HvA) on October 16th, 2013. It provides a basic overview of core technologies used by ICT companies such as Google, Twitter or Facebook. The lecture does not require a strong technical background and stays at a conceptual level.
This presentation was provided by Scott Ziegler of Louisiana State University during the NISO Virtual Conference, Open Data Projects, held on Wednesday, June 13, 2018.
Lotus: Linked Open Text UnleaShed - ISWC COLD '15 - Filip Ilievski
Abstract:
It is difficult to find resources on the Semantic Web today, in particular if one wants to search for resources based on natural language keywords and across multiple datasets.
In this paper, we present LOTUS: Linked Open Text UnleaShed, a full-text lookup index over a huge Linked Open Data collection.
We detail LOTUS' approach, its implementation, its coverage, and demonstrate the ease with which it allows the LOD cloud to be queried in different domain-specific scenarios.
This invited keynote at the Social Computing Track at WI-IAT21 gives an introduction to Knowledge Graphs and how they are built collaboratively by us. It also presents a brief analysis of the links in Wikidata.
The diversity and complexity of content available on the web have dramatically increased in recent years. Multimedia content such as images, videos, maps and voice recordings is published more often than before. Document genres have also diversified, for instance news, blogs, FAQs and wikis. These diversified information sources are often dealt with separately. For example, in web search, users have to switch between search verticals to access different sources. Recently, there has been growing interest in finding effective ways to aggregate these information sources so as to hide the complexity of the information spaces from users searching for relevant information. For example, so-called aggregated search, investigated by the major search engine companies, provides search results from several sources in a single result page. Aggregation itself is not a new paradigm; for instance, aggregate operators are common in database technology.
This talk presents the challenges faced by the likes of web search engines and digital libraries in providing the means to aggregate information from several complex information spaces in a way that helps users in their information seeking tasks. It also discusses how other disciplines, including databases, artificial intelligence, and cognitive science, can be brought into building effective and efficient aggregated search systems.
Keystone summer school 2015 paolo-missier-provenance - Paolo Missier
Lecture on Provenance modelling, given at the first Keystone Summer School, Malta July 2015.
With thanks to Prof. Luc Moreau for contributing some of the slide material from his own tutorial
TwitIE: An Open-Source Information Extraction Pipeline for Microblog Text - Leon Derczynski
Code: http://gate.ac.uk/wiki/twitie.html
Paper: https://gate.ac.uk/sale/ranlp2013/twitie/twitie-ranlp2013.pdf
Twitter is the largest source of microblog text, responsible for gigabytes of human discourse every day. Processing microblog text is difficult: the genre is noisy, documents have little context, and utterances are very short. As such, conventional NLP tools fail when faced with tweets and other microblog text. We present TwitIE, an open-source NLP pipeline customised to microblog text at every stage. Additionally, it includes Twitter-specific data import and metadata handling. This paper introduces each stage of the TwitIE pipeline, which is a modification of the GATE ANNIE open-source pipeline for news text. An evaluation against some state-of-the-art systems is also presented.
Ontological approach for improving semantic web search results - eSAT Journals
Abstract: We propose a personalized information system to provide more user-oriented information, considering context information such as personal profile, location and clicks. Our system can provide associated search results from relations between objects, using context ontology models created from the categorized layers of data. Based on the similarities between item descriptions and user profiles, and the semantic relations between concepts, content-based and collaborative recommendation models are supported by the system. User-defined rules are based on the ontology description of the service, and the system interoperates with any service domain that has an ontology description. Keywords: Ontology; Semantic Web; RDF; OWL
Adding Semantic Edge to Your Content – From Authoring to Delivery - Ontotext
Within the last few years we have seen an ever-increasing demand for more accurate, user-specific content, which in turn overwhelms content providers. This is where smart publishing platforms come into play. They aim at bringing the right content at the right time – digested, easy to comprehend, fast to navigate, and tailored to the readers' personal interests.
The technologies that power them help publishers automate the metadata enrichment process, making it more consistent, accurate and rich.
Ontology is the study of, or concern about, what kinds of things exist: what entities there are in the universe. The term derives from the Greek onto (being) and logia (written or spoken discourse). It is a branch of metaphysics, the study of first principles or the root of things.
Association Rule Mining Based Extraction of Semantic Relations Using Markov L... - IJwest
An ontology is a conceptualization of a domain into a human-understandable, machine-readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. Using semantic relations, it is possible to extract the whole family tree of a prominent personality from a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is beneficial for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction. The existing ontology learning process only produces the result of concept hierarchy extraction; it does not produce the semantic relations between the concepts. Here, we construct predicates and first-order logic formulas, and perform inference and weight learning using a Markov Logic Network. To improve the relations for every input and between the contents, we propose the concept of ARSRE. This method can find the frequent items between concepts and convert existing lightweight ontologies into formal ones. The experimental results show good extraction of semantic relations compared to the state-of-the-art method.
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING - cscpconf
In the last decade, ontologies have played a key technology role for information sharing and agent interoperability in different application domains. In the semantic web domain, ontologies are efficiently used to face the great challenge of representing the semantics of data, in order to bring the actual web to its full power and hence achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To confront this requirement, mapping ontologies is a solution that is not to be avoided. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, of course after resolving the different forms of syntactic, semantic and lexical mismatches. In the contribution presented in this paper, we have integrated the semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference of our contribution with regard to most of the existing semi-automatic algorithms for ontology mapping, such as Chimaera, Prompt, Onion, Glue, etc. To better enhance the performance of our algorithm, the mapping discovery stage is based on the combination of two sub-modules: the former analyses the concepts' names and the latter analyses their properties. Each of these two sub-modules is itself based on the combination of lexical and semantic similarity measures.
Ontology Based Approach for Semantic Information Retrieval System - IJTET Journal
Abstract: The information retrieval system plays an important role in current search engines, which perform search operations based on keywords; this results in an enormous amount of data being returned to the user, from which the user cannot figure out the essential and most important information. This limitation may be overcome by a new web architecture known as the semantic web, which replaces keyword-based search with the conceptual, or semantic, search technique. Natural language processing techniques are mostly implemented in a QA system for asking users' questions, and several steps are followed to convert questions into query form for retrieving an exact answer. In conceptual search, the search engine interprets the meaning of the user's query and the relations among the concepts that a document contains with respect to a particular domain, producing specific answers instead of showing lists of answers. In this paper, we propose an ontology-based semantic information retrieval system built on the Jena semantic web framework, in which the user enters an input query that is parsed by the Stanford Parser; then a triplet extraction algorithm is applied. For each input query, a SPARQL query is formed and fired on the knowledge base (ontology), which finds appropriate RDF triples in the knowledge base and retrieves the relevant information using the Jena framework.
Ontology languages are used in modelling the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standard provides a number of modelling languages that differ in their level of expressivity and are organized in a Semantic Web Stack in such a way that each language level builds on the expressivity of the other. There are several problems when one attempts to use independently developed ontologies. When existing ontologies are adapted for new purposes it requires that certain operations are performed on them. These operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontology as a step towards the formalization of ontological operations using category theory.
Semantic Web: Technologies and Applications for Real-World - Amit Sheth
Amit Sheth and Susie Stephens, "Semantic Web: Technologies and Applications for Real-World," Tutorial at 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
Keynote given at ISWC 2019 Semantic Management for Healthcare Workshop - Mauro Dragoni
Automatically monitoring and supporting a healthy lifestyle is a recent research trend, fostered by the availability of low-cost monitoring devices, and it can significantly contribute to the prevention of chronic diseases deriving from incorrect diet and lack of physical activity. In this talk I will present the HORUS.AI platform: an AI-based platform built upon the integration of semantic web technologies and persuasive techniques for motivating people to adopt a healthy lifestyle or for supporting them in the self-management of chronic diseases. The platform collects data from users' devices, explicit users' inputs, or the external environment (e.g. facts of the world) and interacts with users by using a goal-based metaphor. Interactive dialogues are used for proposing sets of challenges to users who, through a mobile application, are able to provide the required information and to receive contextual motivational messages helping them achieve the proposed goals. HORUS.AI consists of two main layers: the Knowledge Layer and the Dialog-Based Persuasive Layer. The Knowledge Layer contains the knowledge bases modeling the specific domains for which users are monitored (e.g. diet), the rules provided by domain experts, and the RDF-based reasoner that combines the modeled knowledge with the users' generated data. The results produced by reasoning operations are coded into motivational strategies and messages by the Dialog-based Persuasive Layer, which creates and manages dialogues and generates motivational messages based on the information provided by the Knowledge Layer and learned from previous users' behavior. This way, messages are tailored to specific users. These two layers are supported by an Input/Output Layer exploited for directly communicating with users (i.e. via a dedicated mobile application or social media channels) by providing summaries of the acquired data, the chat containing the interactions between the users and the system, and graphical items showing the users' statuses with respect to their goals. HORUS.AI has been validated within the context of different territorial labs and projects, and the observed results demonstrate the suitability of HORUS.AI in real-world scenarios.
Exploiting Multilinguality For Creating Mappings Between Thesauri - Mauro Dragoni
The definition of mappings between multilingual thesauri is a recent research topic concerning the application of the traditional schema mapping algorithms in conjunction with the use of multilingual resources. In this paper, we present a multilingual mapping approach aiming at defining matches between terms belonging to multilingual thesauri. The paper presents the approach as a variant of the schema mapping problem and discusses its evaluation on (i) domain-specific use cases and (ii) a standard benchmark, namely the MultiFarm benchmark, used for measuring the effectiveness of multilingual ontology mapping systems.
The widespread adoption of Information Technology systems and their capability to trace data about process executions has made Information Technology data available for the analysis of process executions. Meanwhile, at business level, static and procedural knowledge, which can be exploited to analyze and reason on data, is often available. In this paper we aim at providing an approach that, by combining static and procedural aspects, business and data levels, and exploiting semantic-based techniques, allows business analysts to infer knowledge and use it to analyze system executions. The proposed solution has been implemented using current scalable Semantic Web technologies, which offer the possibility to keep the advantages of semantic-based reasoning with non-trivial quantities of data.
Authoring OWL 2 ontologies with the TEX-OWL syntax - Mauro Dragoni
This work describes a new syntax that can be used to write OWL 2 ontologies. The syntax, which is known as TEX-OWL, was developed to address the need for an easy-to-read and easy-to-write plain text syntax. TEX-OWL is inspired by LaTeX syntax and covers all constructs of OWL 2.
We designed TEX-OWL to be less verbose than the other OWL syntaxes, and easy to use, especially for quickly developing small-size ontologies with just a text editor.
The important features of the syntax are discussed in this work, and a reference implementation of a Java-based parser and writer is described.
A Fuzzy Approach For Multi-Domain Sentiment Analysis - Mauro Dragoni
An emerging field within Sentiment Analysis concerns the investigation about how sentiment polarities towards concepts have to be adapted with respect to the different domains in which they are used. In this paper, we explore the use of fuzzy logic for modeling concept polarities, and the uncertainty associated with them, with respect to different domains. The approach is based on the use of a knowledge graph built by combining two linguistic resources, namely WordNet and SenticNet. Such a knowledge graph is then exploited by a graph-propagation algorithm that propagates sentiment information learned from labeled datasets. The system implementing the proposed approach has been evaluated on the Blitzer dataset by demonstrating its viability in real-world cases.
Using Semantic and Domain-based Information in CLIR Systems - Mauro Dragoni
Cross-Language Information Retrieval (CLIR) systems extend classic information retrieval mechanisms for allowing users to query across languages, i.e., to retrieve documents written in languages different from the language used for query formulation.
In this paper, we present a CLIR system exploiting multilingual ontologies for enriching documents representation with multilingual semantic information during the indexing phase and for mapping query fragments to concepts during the retrieval phase.
This system has been applied on a domain-specific document collection and the contribution of the ontologies to the CLIR system has been evaluated in conjunction with the use of both Microsoft Bing and Google Translate translation services.
Results demonstrate that the use of domain-specific resources leads to a significant improvement of CLIR system performance.
Multilingual Knowledge Organization Systems Management: Best Practices - Mauro Dragoni
This presentation addresses the most well-known challenges in managing multilingual knowledge organization systems.
Such challenges are presented and it is discussed how they have been addressed with the implementation of a collaborative tool called MoKi.
Collaborative Modeling of Processes and Ontologies with MoKi - Mauro Dragoni
The objective of this framework is to sustain and encourage the collaboration between different kinds of experts for modeling domains and for providing a semantic representation of them. Examples of experts are the Domain Experts (i.e. those who know the domain but usually lack modeling skills) and the Knowledge Engineers (those who have the skills but do not have a clear understanding of the domain). During this talk, I will present the latest version of MoKi, the wiki-based tool designed for supporting such a framework, and I will show how this tool has been customized and extended in several projects in order to face the different challenges raised by the usage of semantic representations in different domains.
Keystone Summer School 2015: Mauro Dragoni, Ontologies For Information Retrieval
1. Ontologies and their use in
Information Retrieval
Mauro Dragoni
Fondazione Bruno Kessler (FBK), Shape and Evolve Living Knowledge Unit (SHELL)
https://shell.fbk.eu/index.php/Mauro_Dragoni - dragoni@fbk.eu
KEYSTONE Training School, Malta
July, 20th 2015
2. Outline
1. On your marks and get set…
2. A general approach: pros and cons of concept-based structured representations
3. Ontology-based IR platforms
4. Behind the lines
a) Cross-language Information Retrieval
b) Ontology Matching
3. Before starting…
What is an ontology?
What is a machine-readable dictionary?
What about ambiguity?
Terms vs. concepts, is everything clear?
4. What is an ontology?
“the branch of philosophy which deals with the nature and the organization
of reality”
“an ontology is an explicit specification of a conceptualization”
[Gruber1993]
conceptualization: abstract model of the world
explicit specification: model described by using unambiguous language
domain ontology
upper ontology
example: DOLCE [Guarino2002]
5. Ontology Components
Classes: entities describing objects' common characteristics (for example: “Agricultural Method”).
Individuals: entities that are instances of classes (for example, “Multi Crops Farming” is an instance of “Agricultural Method”).
Properties: binary relations between entities (for example “IsAffectedBy”).
Attributes (or DataType Properties): characteristics that qualify individuals (for example “Has Name”).
6. Hierarchies
Concepts can be organized in subsumption hierarchies
Meaning: every instance of a sub-concept is also an instance of its super-concept
Examples:
“Intensive Farming” is-a “Agricultural Method”
“Agricultural Method” is-a “Method”
Concept hierarchies are generally represented by using tree structures
7. Attributes and Properties
Properties: binary relations between classes
Domain and co-domain: classes to which individuals need to belong to be in relation
Example: “Agriculture” <isAffectedBy> “Agriculture Pollution”
Attributes: binary relations between an individual and values (not other entities)
Domain: class to which the attribute is applied
Co-domain: the type of the value (for example “String”)
Properties and Attributes can be organized in hierarchies.
8. Steps for building an ontology
To identify the classes of the domain.
To organize them in a hierarchy.
To define properties and attributes.
To define individuals, if any.
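As a concrete illustration of these steps (not part of the original slides), here is a minimal sketch using Python's rdflib library; the agri namespace and all class, property and individual names are hypothetical and only mirror the examples above.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

AGRI = Namespace("http://example.org/agriculture#")  # hypothetical namespace
g = Graph()
g.bind("agri", AGRI)

# 1. Identify the classes of the domain
g.add((AGRI.Method, RDF.type, OWL.Class))
g.add((AGRI.AgriculturalMethod, RDF.type, OWL.Class))

# 2. Organize them in a hierarchy ("Agricultural Method" is-a "Method")
g.add((AGRI.AgriculturalMethod, RDFS.subClassOf, AGRI.Method))

# 3. Define properties and attributes
g.add((AGRI.isAffectedBy, RDF.type, OWL.ObjectProperty))   # property: binary relation
g.add((AGRI.hasName, RDF.type, OWL.DatatypeProperty))      # attribute: value-typed
g.add((AGRI.hasName, RDFS.range, XSD.string))

# 4. Define individuals
g.add((AGRI.MultiCropsFarming, RDF.type, AGRI.AgriculturalMethod))
g.add((AGRI.MultiCropsFarming, AGRI.hasName, Literal("Multi Crops Farming")))

print(g.serialize(format="turtle"))
```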
9. Why are ontologies useful?
Ontologies provide:
common dictionary of terms;
a shared and formal interpretation of the domain.
Ontologies permit to:
solve ambiguities;
share knowledge (not only between humans, but also between machines);
use automatic reasoning techniques.
10. Use of ontologies in IR
Exploit metadata
Entity linking
“which president …” “Barack Obama is-a President”
Extraction of triples from text
applying NLP parsers for extracting dependencies
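To make the triple-extraction idea concrete, here is a deliberately naive sketch (not from the slides) that uses spaCy dependency parses; it assumes the small English model is installed and only catches simple subject-verb-object patterns.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes: python -m spacy download en_core_web_sm

def naive_triples(text):
    """Extract crude (subject, verb, object) triples from dependency parses."""
    doc = nlp(text)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subj = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
            obj = [w for w in token.rights if w.dep_ in ("dobj", "attr")]
            if subj and obj:
                triples.append((subj[0].text, token.lemma_, obj[0].text))
    return triples

print(naive_triples("Barack Obama won the Nobel Peace Prize in 2009."))
# e.g. [('Obama', 'win', 'Prize')], depending on the parse
```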
11. What is a thesaurus?
A “coarse” version of ontologies
Generally, 3 kinds of relations are represented:
hierarchical (generalization/specialization)
equivalence (synonymity)
associative (other kind of relationships)
Extensively used for query expansion approaches [Bhogal2007, Grootjen2006, Qiu1993, Mandala2000]
12. Machine-readable dictionaries
A dictionary in an electronic form.
The power of MRD is characterized by word senses. [Kilgariff1997,
Lakoff1987, Ruhl1989]
Identity of meaning: synonyms [Gove1973]
Inclusion of meaning: hyponymy or hyperonymy; troponymy [Cruse1986,
Green2002, Fellbaum1998]
transitive relationship
Part-whole meaning: meronymy (has part), holonymy (part of)
[Green2002, Cruse1986, Evens1986]
Opposite meaning: antonymy
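The relations listed above can be explored directly in WordNet; a small sketch using NLTK's WordNet interface (the library choice is an assumption, not named on the slide, and the wordnet corpus must be downloaded first).

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

tree = wn.synsets("tree", pos=wn.NOUN)[0]             # first noun sense of "tree"

print([l.name() for l in tree.lemmas()])              # synonyms (identity of meaning)
print(tree.hypernyms())                               # inclusion of meaning (is-a)
print(tree.hyponyms()[:3])                            # more specific senses
print(tree.part_meronyms())                           # part-whole: has-part
print(tree.member_holonyms())                         # part-whole: part-of

good = wn.synsets("good", pos=wn.ADJ)[0]
print([a.name() for l in good.lemmas() for a in l.antonyms()])  # opposite meaning
```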
13. and now…
… let’s see how we can exploit this within
an information retrieval system…
14. Motivations and Challenges
Considering how information is usually represented and classified.
Documents and Queries are represented using terms.
Indexing:
terms are extracted from each document;
terms frequency of each document is computed (TF);
terms frequency over the entire index is computed (IDF).
Searching:
the vector space model is used to compute the similarity between documents and
queries;
queries are generally expanded to increase the recall of the system.
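A compact sketch of this classic pipeline (TF-IDF weighting plus cosine similarity in the vector space model), using scikit-learn; the toy documents and query are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "intensive farming is an agricultural method",
    "organic agriculture avoids synthetic pesticides",
    "information retrieval ranks documents for a query",
]
query = ["organic farming methods"]

vectorizer = TfidfVectorizer()                 # TF * IDF weight per term
doc_vectors = vectorizer.fit_transform(docs)   # indexing step
query_vector = vectorizer.transform(query)

# Vector space model: rank documents by cosine similarity to the query
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```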
15. Drawbacks of the
Term-Based representation – 1/2
The “semantic connections” between terms in documents and queries are
not considered.
Different vector positions may be allocated to the synonyms of the same
term:
the importance of a given concept is distributed among different vector
components;
information loss.
16. Drawbacks of the
Term-Based representation – 2/2
The query expansion has to be used carefully.
It is easier to increase the recall of a system than its precision.
Which is better? [Abdelali2007]
In the worst case, the size of a document vector could be close to the
number of terms used in the repository:
in general, the number of concepts is less than the number of words;
the time needed to compare documents is higher;
17. Intuition Behind
Using concepts to represent the terms contained in documents and
queries. [Dragoni2012b]
1. Documents and Queries may be represented in the same way.
2. The issue related to how many and which terms have to be used for query
expansion is not considered.
3. The size of a concept vector is generally smaller than the size of a term vector.
IMPORTANT: This is not a query expansion technique !!!
20. how to compute concept weights?
a first simple example …
21. how is each concept of the vocabulary weighted?
suppose we have the document “xxyyyz”
a first simple example …
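The slides do not spell out the weighting formula from [Dragoni2012b]; the following is only a plausible minimal sketch of the intuition: term occurrences are mapped to the concepts (senses) they may denote, and each concept accumulates the frequency of its terms, so synonyms reinforce a single vector component. The term-to-concept map below is invented.

```python
from collections import Counter

# Hypothetical vocabulary: each term may denote one or more concepts
term_to_concepts = {
    "x": ["c1"],
    "y": ["c1", "c2"],   # ambiguous term: its weight is split between concepts
    "z": ["c2"],
}

def concept_vector(document_terms):
    """Build a concept-based vector from a sequence of terms."""
    weights = Counter()
    for term in document_terms:
        concepts = term_to_concepts.get(term, [])
        for concept in concepts:
            weights[concept] += 1.0 / len(concepts)  # distribute the occurrence
    return dict(weights)

print(concept_vector(list("xxyyyz")))   # the toy document "xxyyyz"
# {'c1': 3.5, 'c2': 2.5}
```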
22. … that we evaluated
Experiments on the MuchMore Collection (http://muchmore.dfki.de)
The collection contains numerous medical terms.
The term-based representation is advantaged over the semantic representation.
Experiments on the TREC Ad-Hoc Collection:
Results have been compared with the IRSs presented at the TREC-7 and TREC-8 conferences
Only the systems that implement a semantic representation of queries have been considered.
Over dozens of runs, the three systems that perform best at recall 0.0 have been chosen. [Spink2006]
28. Some considerations
Two drawbacks have been identified:
The absence of some terms in the ontology (in particular terms related to specific domains like biomedical, mechanical, business, etc.) may affect the final retrieval result.
a more complete knowledge base is needed.
Term ambiguity. By using a Word Sense Disambiguation approach, concepts
associated with incorrect senses would be discarded or weighted less.
a Word Sense Disambiguation algorithm is required: but it has to be used carefully.
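As a pointer (not from the slides), a minimal word sense disambiguation step can be added with the classic Lesk algorithm as shipped with NLTK; the sentence is a toy example and the wordnet and punkt resources must be downloaded first.

```python
from nltk.wsd import lesk
from nltk.tokenize import word_tokenize  # requires: nltk.download('punkt') and nltk.download('wordnet')

sentence = "The bank can guarantee deposits will eventually cover future tuition costs."
context = word_tokenize(sentence)

sense = lesk(context, "bank", pos="n")   # pick the synset that best overlaps the context
print(sense, "-", sense.definition() if sense else "no sense found")
```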
31. Checkpoint 1
the use of machine-readable dictionaries is suitable for implementing a first semantic engine
but if we use ontologies we have much more information
properties
attributes
the problem is: how can we exploit all this information?
32. Ontology enhanced IR
Enrichment of documents (and queries) with information coming from
semantic resources
information expansion: adding synonyms, antonyms, … not new but still helpful
annotations: relation or association between a semantic entity and a document
Most of the information expansion systems are based on WordNet and the
Roget’s Thesaurus
Systems using annotations are interfaced with the Linked Open Data
cloud, and mainly with Freebase and Wikipedia
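Annotation against Linked Open Data resources is typically done with an entity-linking service; purely as an illustration (the slides do not prescribe a tool), here is a sketch calling the public DBpedia Spotlight REST endpoint. The URL, parameters and response fields are assumptions based on the public service and may change.

```python
import requests

def annotate(text, confidence=0.5):
    """Link mentions in text to DBpedia resources via the public Spotlight endpoint."""
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",   # assumed public endpoint
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [(r["@surfaceForm"], r["@URI"]) for r in resp.json().get("Resources", [])]

print(annotate("Rock musicians from Britain influenced American blues."))
```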
33. Classification of Semantic IR approaches
Semantic knowledge representation: Statistical [Deerwester1990]; Linguistic conceptualization [Gonzalo1998, Mandala1998, Giunchiglia2009]; Ontology-based [Guha2003, Popov2004]
Scope: Web search [Finin2005, Fernandez2008]; Limited domain repositories [Popov2004]; Desktop search [Chirita2005]
Query: Keyword query [Guha2003]; Natural language query [Lopez2009]; Controlled natural language query [Bernstein2006, Cohen2003]; Structured query based on ontology query language [notes]
Content retrieved: Data retrieval; Information retrieval
Content ranking: No ranking; Keyword-based ranking [Guha2003]; Semantic-based ranking [Stojanovic2003]
34. Limitation of Semantic IR approaches – 1/2
Criterion – Limitation (marks for IR / Semantic)
Semantic knowledge representation – no exploitation of the full potential of an ontological language, beyond what can be reduced to conventional classification schemes (x / Partially)
Scope – no scalability to large and heterogeneous repositories of documents (x)
Goal – Boolean retrieval models where the Information Retrieval problem is reduced to a data retrieval task (x)
Query – limited usability (x)
35. Limitation of Semantic IR approaches – 2/2
Criterion – Limitation (marks for IR / Semantic)
Content retrieved – focus on textual content: no management of different formats (multimedia) (Partially / Partially)
Content ranking – lack of a semantic ranking criterion; the ranking (if provided) relies on keyword-based approaches (x / x)
Coverage – knowledge incompleteness [Croft1986] (Partially / x)
Evaluation – lack of standard evaluation frameworks [Giunchiglia2009] (x)
36. A basic ontology-based IR model
[Architecture diagram. Components: User, SPARQL Editor, SPARQL Query, Query Processing, Searching, Indexing, Ranking, Semantic Entities, Semantic Knowledge (ontology + KB), Document Corpus, Semantic Index (weighted annotations), Unsorted Documents, Ranked Documents]
37. Basic ontology-based IR model - Limits
Heterogeneity
a single ontology (or even a set of them) cannot cover all possible domains
Scalability
imagine annotating the Web using all the knowledge bases currently available
a final solution does not exist… but nice and practical approaches can be used
Usability
try to think… are all the people you know able to write queries in SPARQL?
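To make the usability point concrete, this is roughly what even a trivial request looks like when expressed in SPARQL; the namespace and data are toy examples, embedded in a small rdflib snippet so it can be run as-is.

```python
from rdflib import Graph

g = Graph().parse(data="""
@prefix agri: <http://example.org/agriculture#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
agri:MultiCropsFarming a agri:AgriculturalMethod ; rdfs:label "Multi Crops Farming" .
""", format="turtle")

# Even "give me all agricultural methods and their labels" needs this much syntax:
results = g.query("""
    PREFIX agri: <http://example.org/agriculture#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?method ?label
    WHERE { ?method a agri:AgriculturalMethod ; rdfs:label ?label . }
""")
for row in results:
    print(row.method, row.label)
```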
38. Extended ontology-based IR model
[Architecture diagram. Components: User, Natural Language Interface, Natural Language Query, Query Processing, Searching, Indexing, Ranking, Semantic Entities, Preprocessed Semantic Knowledge, Semantic Web, Unstructured Web contents, Semantic Index (weighted annotations), Unsorted Documents, Ranked Documents]
39. Evaluation Results
Mean Average Precision: Semantic System 0.16, Lucene 0.10, TREC Automatic 0.20
Prec@10: Semantic System 0.37, Lucene 0.25, TREC Automatic 0.30
40. A focus on the indexing procedure
Challenge: to link semantic knowledge with documents and queries in an efficient and effective way:
document corpus and semantic knowledge should remain decoupled;
annotations have to be provided in a flexible and scalable way.
Annotations can be provided in two ways:
by applying an information extraction technique based on pure NLP
approaches;
by applying a contextual semantic information approach.
41. Annotator Requirements
Identification of the entities within the documents
conceptually, it is not so much different w.r.t. a traditional IR indexing process
Ontologies must not be touched (decoupling)
Should be open-domain
Scalable-friendly:
indexing of ontologies;
indexing of documents;
an interesting alternative: usage of non-embedded annotations
46. An idea for aggregating rankings
Multi-dimensional aggregation criteria
Document score is computed from different perspectives (criteria)
Assignment of priorities to criteria
Compute criteria weights
Weight of criteria with low priority depends on the score of criteria with high
priority
Aggregate criteria scores [Dragoni2012]
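The exact aggregation operator from [Dragoni2012] is not given in the slides; the sketch below only illustrates the stated idea that lower-priority criteria receive a weight that depends on the scores obtained on higher-priority criteria. The damping rule is invented for illustration.

```python
def aggregate(scores_by_priority):
    """Aggregate per-criterion scores, ordered from highest to lowest priority.

    Each criterion's weight is damped by the scores already obtained on
    higher-priority criteria (an illustrative choice, not the published formula).
    """
    total, carry = 0.0, 1.0
    for score in scores_by_priority:
        total += carry * score
        carry *= score          # low-priority weight depends on high-priority scores
    return total

# Document scored from three perspectives, e.g. entity match, term match, freshness
print(aggregate([0.9, 0.6, 0.8]))   # 0.9 + 0.9*0.6 + 0.54*0.8 = 1.872
```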
47. Querying and Ranking
Queries transformed by mapping terms with ontology entities
Contextual disambiguation is very important
simple example: “Rock musicians Britain”
Ranking: two options
to evaluate only the “matches” between detected entities
to aggregate (in your own way) the ranks produced by using only the entities, only the query terms, and/or both of them
48. Use of multiple ontologies
What we need: an Ontology Gateway
Tasks of an ontology gateway:
collect available semantic content;
store the semantic content efficiently in order to ease its access;
implement an approach for the “selection” of the content
Most important ontology gateways online:
Swoogle [Ding2004,Brin1998]
Watson [Aquin2007,Aquin2007b]
WebCORE [Fernandez2006,Fernandez2007]
49. Use of multiple ontologies - opportunities
Recall improvement:
Ontology 1 focused on entities: stress on the identification of semantic entities within the document
Ontology 2 focused on properties: stress on the identification of relationships between entities in the document
precision should also increase, but some drops are possible.
Supporting multiple perspectives:
analysis of each entity from different points of view
50. Use of multiple ontologies - challenges
To figure out how to use them:
it is necessary to formally represent the relationships between the ontologies
and the techniques used for extracting information from them;
example: you may have ontologies describing the same domain by using
different structures!!!
To find suitable ontologies and mappings:
again: more than one ontology describing the same domain;
it is not a good practice to select only one: build mappings!!!
51. A use case
Information system containing products technical data
users look for something that satisfies their needs
engineers want to exploit information for creating new product variants
Ontologies focused on particular aspects of products
product conceptualizations are separated
57. Checkpoint 2
Annotation of documents is more important than the querying of the
repositories… why?
differences in the amount of content
once we have decided how to annotate documents, queries should be
annotated by using the same procedure in order to homogenize the process
Challenges in building knowledge bases
Ranking… play with them and “stress your creativity”
58. Ontologies and IR – 2 use cases
Demonstrate the usefulness of semantic approaches used in combination
with traditional IR techniques.
Show how IR and Semantics may help each other
Two scenarios:
Cross-language information retrieval [Dragoni2014]
Ontology matching [Dragoni2015]
Sentiment analysis
59. Cross-Language Information Retrieval
Background - Challenges
Out-of-Vocabulary issue
improve the corpora used for training the machine translation model.
usage of domain information for increasing the coverage of the
dictionaries.
Usage of semantic artifacts for structuring the representation of
(multilingual) documents.
GOAL: to integrate domain-specific semantic knowledge within a CLIR system and evaluate its effectiveness
60. Our Scenario
Use case: the agricultural domain
Knowledge resources: Agrovoc and Organic.Lingua ontologies
3 components used in the proposed approach:
Annotator
Indexer
Retriever
62. Annotation Process – Step 2
[Diagram: supported languages en, es, it, de, fr, …]
Document content is used as the query.
Among the candidate results, only “exact matches” are considered.
64. Approach - Index
Given a document:
Text and annotations are extracted.
The context of each concept is retrieved from the ontologies.
Each contextual concept is indexed with a weight proportional to its semantic distance from the semantic annotation.
Structure of each index record:
65. Approach - Retriever
Three retrieval configurations available:
Only translations: query terms are translated by using machine
translation services.
Semantic expansion by exploiting the domain ontology: query terms
are matched with ontology concepts; if an exact match exists, query
is expanded by using the URI of the concept and the URIs of the
contextual ones.
Ontology matching only: terms not having an exact match with
ontology concepts are discarded.
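A minimal sketch of the second configuration (semantic expansion, not the authors' actual implementation): query terms with an exact match in the ontology are expanded with the concept URI and its contextual concepts, the latter down-weighted by their distance. The ontology fragment and the decay factor are invented for illustration.

```python
# Hypothetical ontology fragment: concept URI -> label and contextual concepts by distance
ONTOLOGY = {
    "agro:Agriculture": {"label": "agriculture",
                         "context": {"agro:AgriculturalMethod": 1, "agro:Pollution": 2}},
}
LABEL_INDEX = {v["label"]: uri for uri, v in ONTOLOGY.items()}

def expand_query(terms, decay=0.5):
    """Return (token, weight) pairs: original terms, matched URIs, contextual URIs."""
    expanded = [(t, 1.0) for t in terms]
    for t in terms:
        uri = LABEL_INDEX.get(t)
        if uri:                                   # exact match with an ontology concept
            expanded.append((uri, 1.0))
            for ctx_uri, dist in ONTOLOGY[uri]["context"].items():
                expanded.append((ctx_uri, decay ** dist))   # weight decreases with distance
    return expanded

print(expand_query(["organic", "agriculture"]))
```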
66. Evaluation - Setup
Collection of 13,000 multilingual documents.
48 queries originally provided in English and manually translated
in 12 languages under the supervision of both domain and
language experts.
Gold standard manually built by the domain experts.
MAP, Prec@5, Prec@10, Prec@20, Recall have been used.
69. Ontology Matching
Given two thesauri/ontologies/vocabularies, find alignments between entities
Formally a “match” may be represented with the following 5-tuple:
‹ id, e1, e2, R, c ›
Extensive literature about matching approaches (early ‘80s)
70. Motivations
Need: a system, for experts, able to suggest possible matches between
concepts
Exploit multilinguality… why?
it allows ambiguity to be reduced: the probability, for two different concepts, of having the same label across several languages is very low.
term translations have been adapted to the domain: experts in charge of
translations put a lot of their cultural heritage in choosing the right terms for
each concept.
71. The Proposed Approach - 1
Inspired by information retrieval techniques
Built on top of the Lucene search engine
For each element of the thesaurus a structured multilingual representation is built:
[prefLabel] "Food chains"@en
[prefLabel] "Catene alimentari"@it
[altLabel] "Food distributions"@en
[altLabel] "Reti alimentari"@it
An index for each thesaurus is built, with one field per language:
label-en: “food chain”
label-en: “food distribution”
label-it: “catena alimentare”
label-it: “rete alimentare”
72. The Proposed Approach - 2
How matches are suggested?
source and target thesauri are chosen
for each concept, a query is performed from the source to the target thesaurus
the standard Lucene scoring formula is used for computing the ranking
for each query, a ranking of 5 suggestions is provided to the user
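The system itself is built on Lucene; as a self-contained approximation of the same idea (index the multilingual labels of the target thesaurus, query them with the labels of each source concept, keep the five best-scoring candidates), here is a sketch with TF-IDF character n-grams in scikit-learn. The scoring differs from Lucene's formula, and the two thesauri below are toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy thesauri: concept id -> concatenated multilingual labels (prefLabel + altLabels)
source = {"s1": "food chain catena alimentare", "s2": "pesticide pesticida"}
target = {"t1": "food chains catene alimentari food distributions reti alimentari",
          "t2": "fertilizer fertilizzante", "t3": "pesticides pesticidi"}

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
target_ids = list(target)
target_matrix = vec.fit_transform([target[t] for t in target_ids])

def suggest(concept_id, top_k=5):
    """Rank target concepts as candidate matches for one source concept."""
    sims = cosine_similarity(vec.transform([source[concept_id]]), target_matrix)[0]
    ranked = sorted(zip(target_ids, sims), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

for cid in source:
    print(cid, suggest(cid))
```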
73. Evaluation Set-Up
2 contexts:
six multilingual thesauri (3 medical domain, 3 agricultural domain)
adapted Multifarm benchmark
2 tasks:
matching system (only the first suggestion is considered)
suggestion system
76. Results - 3
System Name Precision Recall F-Measure
IRBOM 0.68 0.43 0.53
WeSeE (2012) 0.61 0.32 0.41
RiMOM (2013) 0.52 0.13 0.21
YAM++ (2013) 0.51 0.36 0.40
YAM++ (2012) 0.50 0.36 0.40
AUTOMSv2 (2012) 0.49 0.10 0.36
Results obtained by all systems on the adapted Multifarm Benchmark
77. So… at the end…
Ontologies in IR is still a controversial topic
Personal Opinion: combining structured and unstructured representations seems to be the most suitable solution
Pay attention to the kind of queries performed by users
Aggregation of results
Be brave… try to work with triples!!!!
[Ruhl1989] C. Ruhl. On Monosemy: A study in linguistic semantics. State University of New York Press, Albany, NY, 1989.
[Gove1973] P.B. Gove. Webster’s New Dictionary of Synonyms. G. & C. Merriam Company, Springfield, MA, 1973.
[Cruse1986] A.D. Cruse. Lexical Semantics. Cambridge University Press, 1986.
[Green2002] R. Green, C.A. Bean, and S.H. Myaeng. The Semantics of Relationships: An Interdisciplinary Perspective. Cambridge University Press, 2002.
[Fellbaum1998] C. Fellbaum, editor. WordNet: An Electronic Lexical Database. MIT Press, 1998.
[Evens1986] M.W. Evens. Relational Models of the Lexicon. Cambridge University Press, 1986.