The concept of the digital library surged in popularity with the development of networking technology. A digital library stores various kinds of documents in digitized format, enabling users smooth access to these documents at subsidized costs. In the recent past, a similar concept, the ontology library, has gained popularity among communities such as the semantic web, artificial intelligence, information science, philosophy, and linguistics.
Postulate Approach to Library Classification
Normative Principles
Three Planes of Work
Modes of Formation of Subjects
Systems Approach to the Study of Subjects
Depth Classification
Classification in Electronic Environment
Classificatory basis for metadata
Knowledge Organization
Standards to facilitate information exchange have always been a subject of concern.
To provide a flexible exchange format that could be used for converting data from libraries and information services of all types, UNESCO developed the Common Communication Format (CCF). The main aim of this format was to produce a method of organising bibliographic descriptions which could be exchanged between institutions. This format was to act as a link between the databases produced in different internal formats of libraries.
A presentation on Interoperability in Digital Libraries by Rupesh Kumar A, Assistant Professor, Department of Studies and Research in Library and Information Science, Tumkur University, Tumakuru, Karnataka, India.
Sears List of Subject Headings, first published by Minnie Earl Sears in 1923, has served as a standard authority list for subject cataloging in small and medium-sized libraries, delivering a basic list of essential headings, together with patterns and examples to guide the cataloger in creating further headings.
All types of libraries and information centres are organized to provide some basic services, which are rendered either in anticipation of or on demand from users. Information services provided in anticipation are termed alerting services, as they alert users to new information of interest; broadly speaking, the same is also termed current awareness service. The primary aim of any library is to provide timely, quality services to its users.
(a) Text: notes, captions, subtitles, contents, indexes.
(b) Data: tables, charts, graphs, spreadsheets.
(c) Graphics: drawings, prints, maps, etc.
(d) Photographic images: negatives, slides, prints.
(e) Animation: both computer-generated animation and video.
(f) Audio: speech and music digitized from cassettes, tapes, CDs, etc.
(g) Video (digital): either converted from analogue film or entirely created within a computer.
RDA (Resource Description and Access) is a new standard for describing library resources, designed to replace AACR2. Library staff, including public services, systems personnel, and catalogers, may have heard mention of RDA but not know much about it or how it will change their daily work. You may have many questions. What is RDA? We'll give a very little bit of history and theoretical background. What is this going to mean for catalogers, ILS managers, and users in the near term? What are the future implications, or, why are we doing this? What are the juicy bits of controversy in cataloger-land? And finally, Do we HAVE to? We'll talk for a while, have some activities that get you thinking, and find out your thoughts on RDA.
Presented at "Captains & Crew Collaborating," the 8th annual paraprofessional conference at J.Y. Joyner Library, East Carolina University.
Information repackaging is the process of repackaging analyzed, consolidated information in a form that is more suitable and usable for library users. It involves customizing information, taking into account the needs and characteristics of individuals or user groups and matching them with the information to be provided, so that diffusion of information occurs.
Ontology Evaluation: a pitfall-based approach to ontology diagnosis, by María Poveda Villalón
Ontology evaluation, which includes ontology diagnosis and repair, is a complex activity that should be carried out in every ontology development project, because it checks for the technical quality of the ontology. However, there is an important gap between the methodological work about ontology evaluation and the tools that support such an activity. More precisely, not many approaches provide clear guidance about how to diagnose ontologies and how to repair them accordingly.
This thesis aims to advance the current state of the art of ontology evaluation, specifically in the ontology diagnosis activity. The main goals of this thesis are (a) to help ontology engineers diagnose their ontologies in order to find common pitfalls and (b) to lessen the effort required from them by providing suitable technological support. This thesis presents the following main contributions:
• A catalogue that describes 41 pitfalls that ontology developers might include in their ontologies.
• A quality model for ontology diagnosis that aligns the pitfall catalogue with existing quality models for semantic technologies.
• The design and implementation of 48 methods for detecting 33 out of the 41 pitfalls defined in the catalogue.
• A system called OOPS! (OntOlogy Pitfall Scanner!) that allows ontology engineers to (semi)automatically diagnose their ontologies.
According to the feedback gathered and the satisfaction tests carried out, the approach developed and presented in this thesis effectively helps users increase the quality of their ontologies. At the time of writing, OOPS! has been broadly adopted by users worldwide, having been used around 3000 times from 60 different countries. OOPS! is integrated with third-party software and is locally installed in private enterprises, where it is used both for ontology development activities and for training courses.
Some tools developed at OEG (Ontology Engineering Group) for facilitating ontology engineering activities such as evaluation, documentation, release, and publication.
Article
Journal of Computing; vol. 2, no. 5
Users of Institutional Repositories and Digital Libraries are known for their need for very specific information about one or more subjects. Characterizing user profiles and offering users new documents and resources is one of the main challenges of today's libraries. In this paper, a Selective Dissemination of Information service is described, which proposes an Ontology-based Context Aware system for identifying a user's context (research subjects, work team, areas of interest). This system enables librarians to broaden user profiles beyond the information that users have entered by hand (such as institution, age, and language). The system requires a context retrieval layer to capture user information and behavior, and an inference engine to support context inference from many information sources (selected documents and users' queries).
See the full record at: http://sedici.unlp.edu.ar/handle/10915/5526
This talk features the basics behind the science of Information Retrieval with a story-mode on information and its various aspects. It then takes you through a quick journey into the process behind building of the search engine.
INTRODUCTION TO INFORMATION RETRIEVAL
This lecture will introduce the information retrieval problem, introduce the terminology related to IR, and provide a history of IR. In particular, the history of the web and its impact on IR will be discussed. Special attention and emphasis will be given to the concept of relevance in IR and the critical role it has played in the development of the subject. The lecture will end with a conceptual explanation of the IR process, and its relationships with other domains as well as current research developments.
INFORMATION RETRIEVAL MODELS
This lecture will present the models that have been used to rank documents according to their estimated relevance to user queries, with the most relevant documents shown ahead of the less relevant. Many of these models form the basis for the ranking algorithms used in past and present search applications. The lecture will describe IR models such as Boolean retrieval, vector space, probabilistic retrieval, language models, and logical models. Relevance feedback, a technique that either implicitly or explicitly modifies user queries in light of the user's interaction with retrieval results, will also be discussed, as it is particularly relevant to web search and personalization.
= Finding a Good Ontology: The Open Ontology Repository Initiative =
Can you find a good ontology to use or extend for your application?
Building on previous registry and repository efforts, the Open Ontology Repository Initiative is a community effort developing open source software for finding, using, and maintaining open source and other ontologies.
The initial implementation of OOR is based on BioPortal (http://bioportal.bioontology.org), which is used to access and share ontologies that are actively used in biomedical communities and currently supports OWL, OBO, and Protege ontologies, LexGrid and RRF vocabularies, and ontology mapping. BioPortal has been developed by the National Center for Biomedical Ontology with support from the NIH Roadmap, but its infrastructure is domain-independent and being extended in various directions.
This presentation will include the following:
* A demonstration of the current public OOR instance
* OOR requirements and challenges
* On-going and planned development efforts (Common Logic support, federation, gatekeeping, provenance, governance, etc.)
* Details on how you can become involved
OOR Architecture: Towards a Network of Linked Ontology Repositories, by Kim Viljanen
(Presented at the OOR Conference Call on Friday 19th November 2010.)
We propose a distributed OOR architecture consisting of simple APIs, ontology repository implementations conforming to these APIs and a registry of these repositories. Together these components create an OOR network that can be used to build services utilizing content from different ontology repositories. The approach is based on an observation that there are different kinds of use cases, ontologies, ontology service providers, etc., and therefore it may not be possible to implement a single OOR server that addresses all possible needs. We suggest that the OOR initiative should focus on APIs and enabling an ecosystem of ontology repositories, not on doing everything by ourselves. Test suites and baseline implementations for APIs are needed for validating API implementations on different ontology repositories and testing the APIs.
OSS and Libraries: Enabling Arabic Libraries and Creating Opportunities, by Massoud AlShareef
What is Open Source?
Who is using Open Source?
Open Source Community and Governance
Why should libraries care?
Library Software Overview
Open Source and Library Software today
Open Source and Arabic Libraries today
Why should Arabic libraries care even more?
Arabic Library Software Success Stories
Creating Opportunities: Should Open Source Software play a role in driving our National ICT Strategy?
Elsevier is the world's largest publisher of scientific, medical and technical (STM) content. An early adopter of XML as a standard representation for content, Elsevier has used MarkLogic in the development of a range of information access and discovery solutions for its customers. This presentation will cover Elsevier's experience with XML-centric content management systems in general and MarkLogic's technology in specific, describing Elsevier's initial adoption and uptake of the technology, current use within the Elsevier suite of online products and solutions, and opportunities for future use. Design patterns for content repositories within a publishing context that have emerged during our use of the technology will be described, and we will touch on a number of issues that have emerged, including XQuery and its adoption within the developer community, the challenges facing XML from new representations for documents and metadata such as JSON and RDF, and the delivery of search applications based on XML infrastructure.
A talk presented January 20, 2013 in the Indo-US Joint Workshop on Biodiversity Informatics at the Ashoka Trust for Research in Ecology and the Environment in Bangalore, India.
Discovery Systems: Connecting the 21st Century Academic User to Content, by Athena Hoeppner
Describes three projects using Discovery to serve academic users: Bibliometric studies of discovery content for graduate and faculty papers; Exposing Open Access content in the Discovery service; Integrating Discovery into the course page editor in a Learning Management System.
Athena Hoeppner. "Discovery Systems: Connecting the 21st Century Academic User to Content." II Seminario Bibliotecas Universitarias del siglo XXI, Bogotá, Colombia, 24 March 2015.
2. Outline
• Introduction and Overview
• Languages for expressing Ontologies
• Tools for building Ontologies
• Ontology Libraries
• Evaluation criteria
• Transitions to the future
3. What is Ontology?
• The term "ontology" can be defined as an explicit specification of conceptualization.
1. Ontology is a term in philosophy, where it means "theory of existence".
2. Ontology is an explicit specification of a conceptualization.
3. Ontology is a body of knowledge describing some domain.
4. Ontology: Definition
Ontologies capture human knowledge based on common sense.
- Lenat and Guha (1990)
Source: http://www.emiliosanfilippo.it/wp-content/uploads/2011/11/Ontology-3.jpg
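The definitions above can be made concrete. Below is a minimal sketch, in Python, of an "explicit specification of a conceptualization": named concepts, an is-a hierarchy, and typed relations between concepts. All class and concept names here are hypothetical illustrations, not taken from any real ontology.

```python
# A minimal, hypothetical "ontology": concepts, an is-a hierarchy,
# and typed relations, i.e., an explicit specification of a conceptualization.

class Ontology:
    def __init__(self):
        self.concepts = set()
        self.subclass_of = {}   # concept -> parent concept (is-a)
        self.relations = set()  # (subject, predicate, object) triples

    def add_concept(self, name, parent=None):
        self.concepts.add(name)
        if parent is not None:
            self.subclass_of[name] = parent

    def add_relation(self, subj, pred, obj):
        self.relations.add((subj, pred, obj))

    def ancestors(self, concept):
        """All superclasses of a concept, following the is-a chain."""
        out = []
        while concept in self.subclass_of:
            concept = self.subclass_of[concept]
            out.append(concept)
        return out

# A toy library-science conceptualization.
onto = Ontology()
onto.add_concept("Document")
onto.add_concept("Book", parent="Document")
onto.add_concept("Journal", parent="Document")
onto.add_concept("Person")
onto.add_concept("Author", parent="Person")
onto.add_relation("Author", "writes", "Book")

print(onto.ancestors("Book"))  # ['Document']
```

In practice such a specification would be written in an ontology language such as RDF/OWL rather than application code; the sketch only shows the idea of stating concepts and relations explicitly.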
5. How are Ontologies currently being used by Librarians?
• Standardize vocabulary.
• Provide better routes.
• Provide better search.
Source: http://gonereading.com/newshop/wpcontent/uploads/2012/01/Librarian-Gift-446x446.png
7. Tools for Building Ontologies
• There are many ontology tools available today, such as Protégé, OntoEdit, Ontolingua, OilEd, pOWL, etc.
Source: http://www.colourbox.com/preview/2456889-245207-three-3d-people-with-the-tools-in-the-hands-of.jpg
8. Protégé
• Free, open source
• Web and desktop clients
• Strong community
Source: http://www.racer-systems.com/img//logo/protege.gif
9. Ontology built using Protégé
Source: http://protege-ontology-editor-knowledge-acquisition-system.136.n4.nabble.com/attachment/4658809/2/jbddhbbc.png
10. What are Ontology Libraries (OL)?
Ontology libraries are systems that collect ontologies from different sources and facilitate the tasks of finding, exploring, and using these ontologies.
Source: http://semanticweb.com/files/2013/09/9685321345_afc5296f95.jpg
11. Need for ontology libraries
• Enables and facilitates interoperability
• Well-established and well-tested ontologies
• Integrates the data much more easily
(Contd.)
12. Need for ontology libraries
• Find and determine the domain
• Evaluate the quality
• Ontology in specific format
• Publish their ontology
15. Library content
What is in it?
- Ontologies and how they are collected
- Gatekeeping
- Mappings and other inter-ontology relations
- Metadata
16. Main function for users
What does it let you do?
-Finding, searching, and evaluating ontologies
-Browsing
-Programmatic access
17. Other features
What else is there?
-Versioning
-Reasoning
-User management
-Notifications
18. What OL offers!!
Ontology library systems offer functions for
managing, adapting and standardizing groups of
ontologies, for indexing content with ontologies, and
for utilizing ontologies in applications.
Source: http://assets.fiercemarkets.com/public/newsletter/fiercehealthit/telehealth4.jpg
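One of the functions named above, indexing content with ontologies, can be sketched as follows: free text is annotated with concepts from a small controlled vocabulary, so documents can later be retrieved by concept rather than by exact word. The vocabulary and concept names are invented for illustration.

```python
# Sketch of indexing text with ontology concepts (vocabulary is invented).
vocabulary = {
    "heart": "CardiovascularSystem",
    "cardiac": "CardiovascularSystem",
    "lung": "RespiratorySystem",
}

def index_document(text):
    """Map words in the text to ontology concepts (case-insensitive)."""
    concepts = set()
    for word in text.lower().split():
        word = word.strip(".,;:")
        if word in vocabulary:
            concepts.add(vocabulary[word])
    return concepts

print(index_document("Cardiac arrest affects the heart."))
# {'CardiovascularSystem'}
```

A search for "CardiovascularSystem" would then retrieve this document even though it never contains that term, which is the point of concept-based indexing.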
19. Requirements for an Ontology Library Service
• Designing the ontologies
• Populating the ontologies
• Publishing the ontologies
(Contd.)
20. Requirements for an Ontology Library Service
• Ontology based semantic application
• Ontology based semantic content creation
• Ontology based end user application
21. Structure of Ontology Library Systems
Source: http://origin-ars.els-cdn.com/content/image/1-s2.0-S0169023X02000411-gr4.jpg
31. oeGov- e-Government
• Distributed creation and maintenance of information
• RDF/OWL formats
• Semantics and controlled vocabularies
• Schemas and several datasets
• Blog system and review
Source: http://oegov.org/images/oegov_logo.jpg
37. Evaluation Criteria
1. Domain
2. Number of ontologies
3. Dynamics
4. Search metadata
5. Search within ontology
6. Browsing ontologies
7. Architecture
8. Components
9. Collection
10. Gatekeeping
11. Search across ontologies
12. Metrics
13. Comments and reviews
14. Ranking
15. Navigation criteria
16. SPARQL endpoint
17. Content available
18. Read or write
19. Intended use
20. Storage
21. Web service access
22. Accepted formats
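Criteria like these can be applied mechanically: given per-library feature records, select the libraries that satisfy a chosen set of requirements. The library names and feature values below are invented for illustration, not taken from the survey.

```python
# Sketch of filtering ontology libraries by evaluation criteria
# (library names and feature values are invented).
libraries = {
    "LibA": {"browsing": True,  "sparql_endpoint": True,  "ontologies": 270},
    "LibB": {"browsing": False, "sparql_endpoint": True,  "ontologies": 31},
    "LibC": {"browsing": True,  "sparql_endpoint": False, "ontologies": 125},
}

def select(requirements):
    """Names of libraries meeting every (criterion, predicate) pair."""
    return sorted(
        name for name, feats in libraries.items()
        if all(pred(feats[crit]) for crit, pred in requirements.items())
    )

wanted = {"browsing": lambda v: v, "ontologies": lambda n: n >= 100}
print(select(wanted))  # ['LibA', 'LibC']
```

In practice an evaluator would fill such records from the kind of comparison table shown on the next slide.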
38. OL Features

| Features               | BioPortal              | Cupboard                 | OBO Foundry | oeGov         | OLS                     | ODP           | SchemaCache    |
|------------------------|------------------------|--------------------------|-------------|---------------|-------------------------|---------------|----------------|
| Domain                 | Biomedical             | General                  | Biomedical  | e-Govt.       | Biomedical              | General       | General        |
| No. of ontologies      | 270                    | 150                      | 86          | 31            | 79                      | 125           | 157            |
| Dynamics               | Growing                | Growing                  | Stable      | Growing       | Stable                  | Growing       | Stable         |
| Search metadata        | Yes                    | Yes                      | Yes         | Blog-based    | No                      | Wiki-based    | No             |
| Search within ontology | Yes, with autocomplete | Advanced search          | No          | No            | Yes, terms and term IDs | No            | Keyword-based  |
| Browsing ontologies    | Yes                    | Yes                      | No          | No            | Yes                     | No            | Yes            |
| Components             | Protégé, LexGrid       | Watson                   | Sourceforge | Wordpress     | OBO API                 | MediaWiki     | Talis platform |
| Architecture           | Single server          | REST-based communication | CVS-based   | Single server | Single server           | Single server | Cloud-based    |
40. Transitions to the future
Challenges and opportunities for an ontology
developer
-Role of ontology libraries in massive adoption
and reuse.
-Community service as provided by ontology libraries
through appropriate endorsements.
(contd.)
41. Transitions to the future
Challenges and opportunities for an ontology
user
- Important for ontologies to be validated by a given
community
- Ontologies are often confined to one particular
domain and one particular format
42. References
1. d'Aquin, Mathieu, & Noy, Natalya F. (2012). Where to publish and find
ontologies? A survey of ontology libraries. Web Semantics: Science,
Services and Agents on the World Wide Web, 11, 96–111.
2. http://obitko.com/tutorials/ontologies-semantic-web/ontologies.html
3. Currás, E. (2010). Ontologies, taxonomies and thesauri in systems science
and systematics. Great Abington, CB: Woodhead Publishing.
4. King, Brandy E., & Reinold, Kathy (2008). Finding the concept, not just
the word: A librarian's guide to ontologies and semantics. Witney, OX:
Chandos Publishing.
5. Bechhofer, S., Goble, C., & Horrocks, I. (2002). Requirements of
ontology languages. IST Project IST-2000-29243 OntoWeb.
6. http://www.dur.ac.uk/p.h.shaw/teaching/ais/lectures/patricia/ais13-rdfs.pdf
43. References (Contd.)
7. Heflin, J. An introduction to the OWL Web Ontology Language.
8. http://protege.stanford.edu/
9. http://en.wikipedia.org/wiki/Ontology_Libraries_(computer_science)
10.http://protege.stanford.edu/publications/ontology_development/ontolog
y101-noy-mcguinness.html
11. Ding, Y., & Fensel, D. (2001). Ontology library systems: The key to
successful ontology reuse. In: First Semantic Web Working Symposium,
Stanford University, pp. 93–112.
12. http://rpc295.cs.man.ac.uk:8080/repository/
13. http://semanticweb.com/oegov-open-government-through-semantic-
web-technologies_b13990
14. http://oegov.org/