The document summarizes a presentation on implementing OpenURL version 1.0. Key points include:
- OpenURL 1.0 expands on version 0.1 by allowing richer metadata and new genres, and by supporting extensibility through the registration of new formats and elements.
- It separates the ContextObject, which describes a referenced item and its context, from its transport via HTTP. ContextObjects can be passed by value or reference.
- The San Antonio Profile provides guidelines for compliant implementation, including recommended formats, entities, and transports.
- Creating OpenURL links involves specifying the resolver URL, referrer, referent identifiers, and optional metadata in a key-value format.
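The key-value link construction described above can be sketched in a few lines. This is an illustrative sketch, not the presentation's code: the resolver base URL and referrer id are hypothetical, while the key names (`url_ver`, `ctx_ver`, `rfr_id`, `rft_val_fmt`, `rft.*`) follow the OpenURL 1.0 KEV ContextObject format for journal articles.

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, referrer_id, metadata):
    """Build an OpenURL 1.0 KEV (key/encoded-value) link.

    `resolver_base` and `referrer_id` are hypothetical example
    values; the key names follow the OpenURL 1.0 KEV format.
    """
    params = {
        "url_ver": "Z39.88-2004",   # OpenURL 1.0 version tag
        "ctx_ver": "Z39.88-2004",   # ContextObject version
        "rfr_id": referrer_id,      # referrer (the link source)
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # referent format
    }
    # Referent metadata keys carry the rft. prefix
    params.update({"rft." + k: v for k, v in metadata.items()})
    return resolver_base + "?" + urlencode(params)

link = build_openurl(
    "https://resolver.example.edu/openurl",   # hypothetical resolver
    "info:sid/example.org:summaries",         # hypothetical referrer id
    {"genre": "article", "jtitle": "D-Lib Magazine",
     "date": "2001", "volume": "7"},
)
```

The resolver then parses these keys back into a ContextObject and decides which services to offer.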
Matching and merging anonymous terms from web sources (IJwest)
This paper describes a workflow of simplifying and matching special language terms in RDF generated …
The document discusses fuzzy type-ahead search techniques for XML data. It describes how traditional XML query techniques like XPath and XQuery can be complex for users. It then discusses fuzzy search methods like the minimum cost tree approach and LCA-based interactive search that allow for approximate keyword matching. The paper also proposes using exclusive LCA and effective indexing and ranking algorithms to efficiently identify the top-k most relevant answers to a fuzzy keyword query in an XML document.
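The approximate keyword matching that fuzzy type-ahead search relies on can be illustrated with a small sketch. This is not the paper's minimum-cost-tree or LCA algorithm, just the core matching predicate: a (possibly mistyped) query matches an indexed term if some prefix of the term is within a small edit distance of it.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance,
    using a single rolling row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def fuzzy_prefix_matches(query, terms, tau=1):
    """Terms having some prefix within edit distance tau of the query."""
    return [t for t in terms
            if min(edit_distance(query, t[:k])
                   for k in range(len(t) + 1)) <= tau]

terms = ["xpath", "xquery", "xml", "keyword"]
fuzzy_prefix_matches("xqury", terms)  # the typo is one edit from "xquery"
```

Real systems avoid scanning every term by indexing terms in a trie and pruning subtrees whose prefixes already exceed the edit-distance threshold.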
JLIFF, Creating a JSON Serialization of OASIS XLIFF (David Filip)
The document discusses the creation of JLIFF, a JSON serialization of the OASIS XLIFF standard. It provides an overview of the key aspects of the JLIFF design, which aims to map the XML-based XLIFF standard to JSON in a way that avoids XMLism and takes into account the structural differences between XML and JSON. Some of the high-level points covered include avoiding mixing of character data and markup in JSON, using arrays to represent sequences, mapping XML namespaces to JSON-LD contexts, and representing modules as part of the core JLIFF specification rather than as extensions. The document also discusses other considerations like data types and handling of extensions.
This document discusses renal function tests and their importance in assessing kidney function and detecting impairment. It describes various tests including urine analysis, blood tests of creatinine and urea, and glomerular function tests. Common indications for evaluating renal function are listed, such as older age, diabetes, and hypertension. The document also outlines approaches to interpreting test results and diagnosing different kidney conditions like acute injury, nephritic syndrome, and nephrotic syndrome.
This document discusses renal function tests (RFTs). It begins by describing the functions of the kidney including formation of urine, excretion of waste products, and regulation of water, electrolytes and acid-base balance.
It then explains that RFTs are used to assess renal damage, monitor progression of renal disease, and adjust dosing of nephrotoxic drugs. RFTs provide information on renal blood flow, glomerular filtration rate, tubular function, and urine output. Tests include urine analysis, measurements of glomerular function like creatinine clearance, and tests of tubular function like concentration and dilution tests. The document describes several RFTs in detail.
This document discusses various renal function tests used to evaluate different aspects of kidney function. It describes tests of glomerular filtration rate (GFR) including clearance tests using substances like creatinine, inulin, and radioactive tracers. It also discusses tubular function tests like urine concentration tests, osmolarity measurements, and tests of the kidney's response to vasopressin. Formulas for calculating clearance, osmolarity, and free water clearance are provided. The significance of GFR measurements and estimated GFR formulas like Cockcroft-Gault and MDRD are summarized.
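The estimated-GFR formulas mentioned above can be made concrete. A minimal sketch of the Cockcroft-Gault creatinine-clearance estimate (weight in kg, serum creatinine in mg/dL), shown here as an illustration rather than clinical guidance:

```python
def cockcroft_gault(age, weight_kg, serum_cr_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by Cockcroft-Gault:
    CrCl = (140 - age) * weight / (72 * serum creatinine),
    multiplied by 0.85 for women."""
    crcl = (140 - age) * weight_kg / (72 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl

cockcroft_gault(60, 72, 1.0)               # 80.0 mL/min
cockcroft_gault(60, 72, 1.0, female=True)  # 68.0 mL/min
```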
This document discusses the key functions and mechanisms of the kidneys. The kidneys are responsible for regulating water, electrolyte and acid-base balance, and excreting metabolic waste products like urea and creatinine. They also retain substances vital to the body like glucose and amino acids. The kidneys function as endocrine organs by producing hormones like erythropoietin and calcitriol. The nephron is the functional unit of the kidney, and glomerular filtration and tubular reabsorption are the key processes in urine formation. Various tests are used to assess kidney function, including clearance tests using creatinine and urea, as well as examination of blood and urine and the use of renal thresholds.
The kidneys contain approximately 1 million nephrons each. Nephrons are the functional units of the kidney and consist of glomeruli and tubules. Nephron formation is complete by birth but maturation continues into childhood. A decreased number of nephrons can lead to renal disease later in life. Evaluation of renal function includes urine analysis, measurement of glomerular filtration rate (GFR) using creatinine clearance or formulas, and tests of urinary concentration and acidification abilities.
SKOS - 2007 Open Forum on Metadata Registries - NYC (jonphipps)
A brief introduction to SKOS (Simple Knowledge Organization Systems) and its usage in the NSDL Metadata Registry, with some discussion of current challenges.
Exchange of usage metadata in a network of institutional repositories: the … (Benoit Pauwels)
The document discusses the exchange of usage metadata between institutional repositories in a network called Economists Online (EO). It proposes using Scholarly Works Usage Profiles (SWUP) based on the OpenURL ContextObject framework to normalize usage data from different sources. SWUP maps log file information like downloads to standardized identifiers for items, users, services. This allows aggregated usage analysis and ranking of popular publications across the EO network.
This document discusses annotation services provided by Brown University Library for annotating digital texts. It describes several digital humanities projects at Brown that involve annotation. It then explains how the library uses AtomPub and RDF to publish annotations on the web as Linked Open Data with metadata and links back to the annotated sources. Users can annotate portions of documents and their annotations will be ingested into the repository and syndicated as Atom feeds that others can subscribe to.
Structured Dynamics provides 'ontology-driven applications'. Our product stack is geared to enable the semantic enterprise. The products are premised on preserving and leveraging existing information assets in an incremental, low-risk way. SD's products span from converters to authoring environments to Web services middleware and to eventual ontologies and user interfaces and applications.
Annotating Digital Texts in the Brown University Library (Timothy Cole)
The document discusses annotating digital texts at Brown University Library. It describes several projects at Brown that involve textual scholarship and digital humanities. It then explains the Pico Project, which aims to annotate Giovanni Pico della Mirandola's 900 Theses. It outlines how annotations of digital objects are ingested and stored in the Brown Digital Repository using AtomPub, XML, RDF, and Linked Data standards to allow for aggregation, syndication, and addressing of annotations.
This document provides an introduction to Lucene, an open-source information retrieval library. It discusses Lucene's components and architecture, how it models content and performs indexing and searching. It also summarizes how to build search applications using Lucene, including acquiring content, building documents, analyzing text, indexing documents, and querying. Finally, it discusses frameworks that are built on Lucene like Compass and Solr.
Information for learning object exchange (David Massart)
The document discusses Information for Learning Object eXchange (ILOX), which uses the Functional Requirements for Bibliographic Records (FRBR) conceptual model to describe learning objects at different levels of abstraction (Work, Expression, Manifestation, Item). It provides examples of metadata that could be used at each FRBR level and discusses best practices for selecting the appropriate level and metadata when describing learning objects in different contexts like harvesting, searching, and publishing. ILOX is compared to the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) standard, noting they are orthogonal and can be combined.
Elasticsearch is a distributed, RESTful, free and open source search engine based on Apache Lucene. It allows for fast full text searches across large volumes of data. Documents are indexed in Elasticsearch to build an inverted index that allows for fast keyword searches. The index maps words or numbers to their locations in documents for fast retrieval. Elasticsearch uses Apache Lucene to create and manage the inverted index.
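The inverted-index idea described above can be sketched in a few lines. This is a toy illustration of the structure, not Elasticsearch's or Lucene's actual data layout: each token maps to the set of document ids that contain it, so a keyword query is a set intersection rather than a scan of every document.

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted index: maps each token to the ids of the
    documents containing it."""
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        # Naive tokenization; real engines apply analyzers
        # (stemming, stop words, etc.) here.
        for token in text.lower().split():
            self.postings[token].add(doc_id)

    def search(self, *tokens):
        """Documents containing every query token (AND semantics)."""
        sets = [self.postings.get(t.lower(), set()) for t in tokens]
        return set.intersection(*sets) if sets else set()

idx = InvertedIndex()
idx.add(1, "fast full text search")
idx.add(2, "distributed search engine")
idx.search("search")        # both documents
idx.search("full", "text")  # document 1 only
```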
Toward Semantic Representation of Science in Electronic Laboratory Notebooks … (Stuart Chalk)
An electronic laboratory notebook (ELN) can be characterized as a system that allows scientists to capture the data and resources used in performing scientific experiments. This allows users to easily organize and find their data; however, little information about the scientific process is recorded.
In this paper we highlight the current status of progress toward semantic representation of science in ELNs.
Database systems based on the object data model were originally known as object-oriented databases (OODBs). These are mainly used for complex objects.
This document discusses several potential applications of the Open Archives Initiative's Object Reuse and Exchange (ORE) standard based on the Dutch repository infrastructure. It describes using ORE to create compound digital objects that aggregate publications with their related datasets. This allows establishing detailed relationships between publications and datasets. It also outlines a demonstrator project called SamenInDelen that aims to create six enhanced publications combining publications with different types of research data, like survey data and experimental economics data. The demonstrator will evaluate the Data Documentation Initiative (DDI) standard and create a sustainable environment for these enhanced publications.
Oracle Text is a search technology built into Oracle Database that allows full-text searches of both structured and unstructured data. It provides features like Boolean search, stemming, thesaurus, and result ranking. The Oracle Text indexing process transforms documents into plain text, identifies sections, splits text into words or tokens, and builds an index mapping keywords to documents. Developers can customize the indexing process by defining their own data sources, filters, sectioners, and lexers.
The slides show what linked data is and how we experiment with linked data in the area of legislative documents (in the Czech Republic).
Download the slides for detailed embedded comments.
The document discusses perspectives on metadata from web resources and database systems. It describes how metadata comes in many forms and serves various purposes, such as supporting discovery and identification of information resources on the web (resource metadata), and ensuring consistency and analysis of structured data in databases (metadata in database systems). Resource metadata commonly follows standards and is stored separately from the resources it describes, while database metadata includes both structural metadata describing data organization and content metadata in the form of data dictionaries.
Dublin Core Application Profile for Scholarly Works Slainte (Julie Allinson)
UKOLN developed an application profile and metadata model for scholarly works in institutional repositories that is based on FRBR (Functional Requirements for Bibliographic Records) and Dublin Core. The model defines entities for scholarly works, expressions, manifestations, and copies. Properties were added to Dublin Core to describe the relationships between these entities. Next steps include getting community acceptance and deploying the application profile.
The document describes the SFX framework for context-sensitive reference linking, which allows a user accessing a citation to be redirected to an appropriate full text or service based on their context. The framework uses an OpenURL standard to pass citation metadata from a link source to a parsing server, which then sends the metadata to a linking server to determine the most relevant services and create dynamic links to them based on the user's access and the available library collections and resources. The goal is to provide context-sensitive services to users based on their access and the cited item metadata rather than relying on pre-computed static links.
Semantic Web: Technologies and Applications for Real-World (Amit Sheth)
Amit Sheth and Susie Stephens, "Semantic Web: Technologies and Applications for Real-World," Tutorial at 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
The document discusses the Open Archive Initiative Protocol for Metadata Harvesting (OAI-PMH). It describes OAI-PMH as a standard that allows data providers to make metadata available via HTTP so that service providers can harvest the metadata to develop value-added services. It provides details on the various requests and operations that are part of the OAI-PMH protocol. The document also discusses some implementation issues and examples of service providers that utilize OAI-PMH harvested metadata.
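OAI-PMH requests are plain HTTP GETs with a `verb` parameter plus verb-specific arguments, so building one is just URL construction. A minimal sketch, where the repository endpoint is hypothetical while the verb and argument names come from the OAI-PMH 2.0 protocol:

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **kwargs):
    """Build an OAI-PMH request URL. The verb and argument names
    (verb, metadataPrefix, from, ...) are defined by OAI-PMH 2.0."""
    return base_url + "?" + urlencode({"verb": verb, **kwargs})

# Harvest Dublin Core records changed since a given date
url = oai_request(
    "https://repo.example.org/oai",   # hypothetical endpoint
    "ListRecords",
    metadataPrefix="oai_dc",
    **{"from": "2008-01-01"},         # 'from' is a Python keyword
)
```

A harvester would issue this GET, parse the XML response, and repeat with the returned `resumptionToken` until the full set has been retrieved.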
Metadata for Terminology / KOS Resources (Marcia Zeng)
1. Why do we need metadata for terminology resources? 2. What do we need to know about a terminology resource? 3. Is there a standardized set of metadata elements for terminology resources?-- a presentation at the "New Dimensions in Knowledge Organization Systems", a Joint NKOS/ CENDI Workshop, World Bank, Washington, DC. September 11, 2008 http://nkos.slis.kent.edu/2008workshop/NKOS-CENDI2008.htm
This document provides guidance for vendors responding to a request for proposal (RFP). It outlines the key steps, which include reading the RFP thoroughly, establishing win themes in an internal kickoff meeting, collecting questions, framing the response, ensuring proper grammar, conducting an internal review, submitting before the deadline, preparing for presentations as an assembled team with rehearsal, taking nothing for granted by being overly prepared, negotiating if selected, celebrating the outcome, and conducting a post-mortem review.
The document discusses the request for proposal (RFP) process. It defines an RFP as an invitation for vendors to submit proposals to provide goods or services to an organization. The document outlines the key steps in the RFP process, including assessing needs, preparing and distributing the RFP, evaluating proposals, conducting presentations, and negotiating contracts. It provides guidance on elements to include in an RFP, questions to ask vendors, tips for evaluating proposals and presentations, and best practices for negotiations.
Semantic Web: Technolgies and Applications for Real-WorldAmit Sheth
Amit Sheth and Susie Stephens, "Semantic Web: Technolgies and Applications for Real-World," Tutorial at 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
The document discusses the Open Archive Initiative Protocol for Metadata Harvesting (OAI-PMH). It describes OAI-PMH as a standard that allows data providers to make metadata available via HTTP so that service providers can harvest the metadata to develop value-added services. It provides details on the various requests and operations that are part of the OAI-PMH protocol. The document also discusses some implementation issues and examples of service providers that utilize OAI-PMH harvested metadata.
Metadata for Terminology / KOS ResourcesMarcia Zeng
1. Why do we need metadata for terminology resources? 2. What do we need to know about a terminology resource? 3. Is there a standardized set of metadata elements for terminology resources?-- a presentation at the "New Dimensions in Knowledge Organization Systems", a Joint NKOS/ CENDI Workshop, World Bank, Washington, DC. September 11, 2008 http://nkos.slis.kent.edu/2008workshop/NKOS-CENDI2008.htm
This document provides guidance for vendors responding to a request for proposal (RFP). It outlines the key steps, which include reading the RFP thoroughly, establishing win themes in an internal kickoff meeting, collecting questions, framing the response, ensuring proper grammar, conducting an internal review, submitting before the deadline, preparing for presentations as an assembled team with rehearsal, taking nothing for granted by being overly prepared, negotiating if selected, celebrating the outcome, and conducting a post-mortem review.
The document discusses the request for proposal (RFP) process. It defines an RFP as an invitation for vendors to submit proposals to provide goods or services to an organization. The document outlines the key steps in the RFP process, including assessing needs, preparing and distributing the RFP, evaluating proposals, conducting presentations, and negotiating contracts. It provides guidance on elements to include in an RFP, questions to ask vendors, tips for evaluating proposals and presentations, and best practices for negotiations.
This document discusses the RFP (Request for Proposal) process. It begins by outlining when an RFP may be needed, such as when a contract is up for renewal or there are issues with the current vendor. It then discusses selecting a consultant to manage the RFP process if desired. The document outlines the consultant's role in defining needs, identifying vendors, developing the RFP, managing communications and evaluations. Key aspects of the RFP are described like requirements, expectations and allowing vendor questions. The proposal, demo and contract phases are also summarized. The goal is to have a smooth transition to the new vendor selected through this competitive process.
This document provides guidance on executing a successful RFP (request for proposal) process. It begins by outlining when an RFP is the right tool and when it may not be suitable. When scope is unclear or requirements are not well defined, a project charter can help determine the best path forward. The document emphasizes treating the RFP as a process, not just a document, with clear communication and sufficient time allotted. It also provides tips on prioritizing requirements, evaluating differentiators between vendors, negotiating contracts, and determining when to engage a consultant.
This document summarizes a seminar on networking for career development. The speaker has over 24 years of experience in strategy, sales, legal, and business development. They will discuss their experiences as a mentee, peer, and mentor. Networking is defined as developing business opportunities through referrals and introductions in person or online to build enduring relationships. The speaker will discuss why networking and mentoring are important for meeting people in your field, learning industry dynamics, and finding new opportunities. They will provide tips on how to network strategically including starting with goals, focusing on personal connections, using professional societies and social networks, and maintaining a long-term perspective. Contact details are provided for anyone seeking mentoring advice.
Elizabeth Demers is a senior acquisitions editor at Johns Hopkins University Press with 20 years of experience in academic and trade publishing. She signs 20-30 books per year, including monographs, trade titles, and course adoption books. She commissions new books, evaluates submitted manuscripts, provides developmental edits, and attends conferences to promote books and the press. Her talk discusses strategies for networking to build professional connections in two areas: building her book list through conferences, outreach, and social media; and finding future career opportunities by getting involved in the industry and being generous with her time and recommendations.
Angela Cochran is a director, mother, wife, daughter, and volunteer leader who advocates for networking through volunteering and active participation. She recommends getting involved in committees and leadership roles to meet people, learn negotiation and collaboration skills, and gain experience in governance. Cochran also suggests attending professional events to ask questions, start conversations, exchange business cards, contribute online, and speak up so others realize your knowledge and potential to contribute.
Digital Science's mission is to fuel scientific discovery with software that simplifies research. They aim to empower researchers with disruptive technology. They incubate and invest in startups in the research field, with the goal of making research simpler so researchers have more time for discovery. Digital Science is a technology company that serves the needs of scientific research by changing the way science works.
The document discusses diversity and inclusion in mentorship at the American Society of Civil Engineers (ASCE). It describes the ASCE Diversity & Inclusion Council established in 2014 with a mission to foster understanding and cultivate an inclusive workforce. The council has 13 members from different departments, designations, races, ethnicities, and genders. It also works with a separate committee for ASCE's over 150,000 members from 177 countries. Activities to promote diversity include highlighting heritage months, lunch-and-learn sessions on topics like disability etiquette and working styles, and inviting outside speakers on bias. Mentorship can be formal or informal and aims to bridge gaps in skills, self-awareness, and confidence through
The Mentorship Program at T&F was created in 2010 based on employee feedback requesting guidance and support from experienced employees. The program is informal with 1:1 mentoring relationships lasting 6-12 months between employees in different divisions. Over 70 matches have been made in 5 years with only 2 not working out. Benefits include 20% of participants being promoted, 10% transferring, and under 5% turnover. The program increased employee engagement and led to improved productivity and cost savings.
This document discusses mentoring at the American Society of Civil Engineers (ASCE). It provides details about the pilot mentoring program launched in 2014 and the full program launched in 2015. Key points include pairing mentees and mentors, providing training and guidelines, and collecting feedback. The program aimed to facilitate a culture shift at ASCE to emphasize core values like trust, teamwork and excellence. Lessons learned include ensuring mentors and mentees are a good match and maintaining expectations. The author provides their own experience being paired as a mentor and mentee.
The document discusses advice and mentorship. It presents a series of fictional scenarios where a person seeks advice at different career stages and receives both helpful and unhelpful advice. It then provides recommendations for finding mentors and making the most of advice received, such as looking across different fields, mentoring others, and remembering that not all advice should be followed. The overall message is that while advice can be good or bad, it is still useful to consider different perspectives to help advance one's career.
October Ivins has worked in various library and information science roles since 1985, including positions at UNC Chapel Hill Library, LSU Baton Rouge Library, and UT Austin. She has been involved with professional organizations like ALA, NASIG, and SSP since 1981. As an independent consultant since 2001, Ivins mentors others on career development topics such as getting the most out of conferences, choosing positions, supervisor and coworker issues, and professional associations. Her document provides advice on training opportunities, managing staff, getting referrals, and preparing for phone interviews.
Early in one's career, a formal mentor is not necessary as support can be found from observing mid-to-late career colleagues. Peer mentoring through collaboration with other managers, especially other women managers, can also be effective. As careers advance, having a women mentor becomes important as women face unique challenges in the workplace and mentors help other women navigate their careers. Without any mentor, one risks lacking career advice, feeling stagnant in their career progression, and experiencing periods of career confusion with no expert to provide guidance.
Adrian Stanley discussed his experience mentoring fellows through the SSP program. He explained that mentoring involves softer guidance to help mentees develop over the long term through balanced listening, directing, and connecting. Fellows benefit from the experience and connections of mentors, who can help open doors, share new perspectives, and make introductions to expand networks and opportunities in the industry. Feedback from fellows showed mentoring helped them learn from experience, feel more included and secure asking questions, and broaden their industry perspectives.
The document discusses two kinds of mentorship at the nonprofit organization BioOne. It provides an overview of BioOne's mission to make scientific research more accessible and its founding by both library and publisher interests. It then defines a "culture of mentorship" as a work environment where employees feel comfortable getting advice from supervisors and colleagues, who see them as whole people rather than just skills. The second kind of mentorship is described as a more traditional unofficial mentor who provides professional guidance. It concludes by listing the executive staff of BioOne and contact information for the speaker.
This document provides a summary of October Ivins' career experience and areas of expertise. It lists her educational background, including degrees from UNC Chapel Hill Library in 1974-1985, UNC Chapel Hill SILS in 1985-1987, and LSU Baton Rouge Library in 1987-1995. It also outlines her work experience at UT Austin SILS from 1995-1998, Publist.com from 1998-2000, Booktech.com from 2000-2001, and as an independent consultant from 2001-present. The document then discusses how her definition of an information professional has loosened over time to include various managerial roles. It concludes by listing topics she provides career coaching and mentoring on, such as choosing jobs
Mohammad H Asadi Lari presented on creating an office culture of mentorship from the perspective of an early career student and mentee. He discussed his experiences being mentored through the SSP Fellowship program and beyond. Emerging trends in early career mentorship include more organizations introducing formal mentorship opportunities and an increase in both professional and peer mentoring models. Mentorship provides visible benefits like networking and career development, as well as hidden benefits beyond initial programs.
This document discusses opportunities for Western academic publishers in China. It notes that China is a rapidly growing market with increasing research output and funding. However, it is also highly competitive. The document outlines several strategies publishers can consider to engage with the Chinese market, including developing local language materials, using social media platforms allowed in China, attending Chinese conferences, exploring co-publishing opportunities with Chinese partners, and developing a long-term strategic plan focused on impact and relationships within China. It also discusses China's increasing open access policies and investments in research universities that could affect publishing opportunities.
This document discusses JSTOR's growing participation in Turkey from 1999-2014. It shows that participation grew slowly at first but increased significantly after the Turkish government began funding access to JSTOR collections through the Anatolian University Libraries Consortium in 2005. Participation and number of collections licensed continued to grow steadily through partnerships with the consortium and engaging a licensing agent in 2013. While agents can help with local representation, awareness, and relationships, they also present challenges of managing expectations, competing demands, and individuals not reporting to JSTOR.
2. Registry · Referrer · Referring Entity · By Reference · Schema · Namespace · POST · UTF-8 · KEV · ContextObject · Profiles · OpenURL 1.0 · Service types · Referent · URI · Metadata · HTTPS · ORI formats · By Value · Resolver · Identifiers · XML · GET · HTTP · Requester · Encoding
3. Overview
Why OpenURL 1.0
Definitions
ContextObject and related “entities”
San Antonio Profile
Becoming a compliant source
4. OpenURL 0.1 example
http://example.org/myResolver?   [BaseURL]
sid=myid:mydb   [Source]
&id=doi:10.1126/science.275.5304.1320
&id=pmid:9036860
&genre=article   [Item]
&atitle=Isolation of a common receptor for coxsackie B
&title=Science
&aulast=Bergelson
&auinit=J
&date=1997
&volume=275
&spage=1320
&epage=1323
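A 0.1 link like the one above is just a BaseURL plus URL-encoded key-value pairs, so it can be assembled with any URL library. A minimal sketch in Python (the helper name build_openurl_01 is ours, not part of the standard):

```python
from urllib.parse import urlencode

# Hypothetical helper (not part of the standard): assemble an OpenURL 0.1
# link from a BaseURL, a source id (sid), item identifiers, and metadata.
def build_openurl_01(base_url, sid, ids=(), **metadata):
    pairs = [("sid", sid)]
    pairs += [("id", i) for i in ids]          # "id" may repeat (DOI, PMID)
    pairs += list(metadata.items())
    return base_url + "?" + urlencode(pairs)   # percent-encodes the values

link = build_openurl_01(
    "http://example.org/myResolver",
    "myid:mydb",
    ids=["doi:10.1126/science.275.5304.1320", "pmid:9036860"],
    genre="article",
    atitle="Isolation of a common receptor for coxsackie B",
    title="Science",
    aulast="Bergelson",
    auinit="J",
    date="1997",
    volume="275",
    spage="1320",
    epage="1323",
)
```

urlencode takes care of escaping the spaces and colons that a hand-built query string would get wrong.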
5. Why OpenURL 1.0
Represent new genres
Richer metadata options
Version control
More complete context description
Ability to send request “by reference”
Extensible
6. New genres
OpenURL 0.1 (genres):
Journal, Article, Preprint, Book, Book item, Conference, Proceeding
OpenURL 1.0 (format / genres):
Journal format: Journal, Issue, Article, Preprint, Conference, Proceeding
Book format: Book, Book item, Report, Document, Conference, Proceeding
Dissertation
Patent
Others can be registered
7. Richer metadata formats
OpenURL 0.1:
Key-value pairs
Limited pre-set list of elements
OpenURL 1.0:
Key-value pairs
XML
MARC
Other formats can be registered
Options for element sets
New data elements can be registered
8. Describing “context”
Up to six entities describe the item and its context:
Referent (the item being referenced)
Referrer (the site sending the request)
Resolver (the site the request is sent to)
Requester (the user making the request)
Referring entity (the article containing the reference)
Service type (what the resolver should do)
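The six entity slots can be modeled as a small data structure. A sketch, assuming our own class and field names (the standard defines the entities abstractly, not this schema):

```python
from dataclasses import dataclass, field
from typing import Optional

# Sketch only: class and field names are ours; the standard defines the
# six entities abstractly, not as this schema.
@dataclass
class Entity:
    identifiers: list = field(default_factory=list)  # e.g. DOI, PMID
    metadata: dict = field(default_factory=dict)     # e.g. {"atitle": ...}

@dataclass
class ContextObject:
    referent: Entity                           # item being referenced (required)
    referrer: Optional[Entity] = None          # site sending the request
    resolver: Optional[Entity] = None          # site the request is sent to
    requester: Optional[Entity] = None         # user making the request
    referring_entity: Optional[Entity] = None  # article containing the reference
    service_type: Optional[Entity] = None      # what the resolver should do
```

Only the Referent is always present; the other five slots stay empty when that context is unknown.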
9. New terminology
ContextObject
An information construct that binds a description of a primary entity -- the referenced resource -- together with descriptions of entities that indicate the context of the reference to the referenced resource.
10. New terminology
Entity
One of the six possible constituents of a ContextObject: Referent, Requester, Referrer, Resolver, ReferringEntity, or ServiceType.
11. New terminology
Format
A concrete method of expression for a class of information constructs. It is a triple comprising:
(1) a Physical Representation;
(2) a Constraint Language; and
(3) a set of constraints expressed in that Constraint Language
12. Example of metadata “Format”
ori:fmt:kev:mtx:journal
ori - to do with OpenURL
fmt - describing a format
kev - physical element representation is key-encoded-values, e.g. aulast=Smith
mtx - constraint language is a table (matrix), the form used to describe the list of possible elements; other examples are DTD or XML Schema
journal - the actual set of elements that have been defined
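Because a format identifier is a colon-delimited sequence, its parts can be pulled out mechanically. An illustrative sketch (the function name and dictionary labels are ours):

```python
# Illustrative only: function name and labels are ours. Split a registry
# format identifier such as "ori:fmt:kev:mtx:journal" into its parts.
def explain_format(fmt_id):
    prefix, construct, representation, constraint_lang, element_set = fmt_id.split(":")
    return {
        "prefix": prefix,                           # ori: to do with OpenURL
        "construct": construct,                     # fmt: describing a format
        "physical representation": representation,  # kev: key-encoded-values
        "constraint language": constraint_lang,     # mtx: a table (matrix)
        "element set": element_set,                 # journal: the defined elements
    }
```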
15. Using Format in a sentence
http://example.org/myResolver
?url_ver=z39.88-2003
&url_ctx_fmt=ori:fmt:kev:mtx:ctx   [ContextObject format]
&rft_val_fmt=ori:fmt:kev:mtx:journal   [Metadata format]
&rfr_id=ori:rfr:myid.com:mydb
&rft_id=ori:doi:10.1126/science.275.5304.1320
&rft_id=ori:pmid:9036860
16. New terminology
Transport
A Transport is a network protocol and the method in which it is used to convey representations of ContextObjects.
OpenURL is about “transporting” ContextObjects using HTTP GET and POST.
17. New terminology
By-Value OpenURL
An OpenURL in which all of the metadata and identifiers for the entities are included as part of the request.
18. Example of By-Value OpenURL
http://example.org/myResolver
?url_ver=z39.88-2003
&url_ctx_fmt=ori:fmt:kev:mtx:ctx
&rft_val_fmt=ori:fmt:kev:mtx:journal
&rfr_id=ori:rfr:myid.com:mydb
&rft_id=ori:doi:10.1126/science.275.5304.1320
&rft_id=ori:pmid:9036860
&rft.genre=article
&rft.atitle=Isolation of a common receptor for coxsackie B
&rft.title=Science
&rft.aulast=Bergelson
&rft.auinit=J
&rft.date=1997
…
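A by-value link like the one above can be assembled from an ordered list of pairs; a list of tuples (rather than a dict) is needed because rft_id legitimately repeats. A sketch using Python's standard urlencode:

```python
from urllib.parse import urlencode

# The pairs from the by-value example; a list of tuples (not a dict)
# because rft_id appears more than once.
params = [
    ("url_ver", "z39.88-2003"),
    ("url_ctx_fmt", "ori:fmt:kev:mtx:ctx"),
    ("rft_val_fmt", "ori:fmt:kev:mtx:journal"),
    ("rfr_id", "ori:rfr:myid.com:mydb"),
    ("rft_id", "ori:doi:10.1126/science.275.5304.1320"),
    ("rft_id", "ori:pmid:9036860"),
    ("rft.genre", "article"),
    ("rft.atitle", "Isolation of a common receptor for coxsackie B"),
    ("rft.title", "Science"),
    ("rft.aulast", "Bergelson"),
    ("rft.auinit", "J"),
    ("rft.date", "1997"),
]
by_value_link = "http://example.org/myResolver?" + urlencode(params)
```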
19. New terminology
By-Reference OpenURL
An OpenURL in which the initial request contains a pointer to the ContextObject being referenced. Optionally, an entity within a ContextObject can be pointed to.
The ContextObject can be created as a stand-alone file on a web server, then referred to. This can reduce the size of web pages and the amount of data transferred.
20. Example of By-Reference OpenURL
http://example.org/myResolver
?url_ver=z39.88-2003
&url_ctx_fmt=ori:fmt:kev:mtx:ctx
&url_ctx_ref=uri:http://www.example.org/objects/1234.txt
url_ctx_ref declares that this ContextObject (ctx) is being passed by reference (ref); its value is the network location of the ContextObject.
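The by-reference form carries only three parameters, which is what makes it compact. A sketch, reusing the slide's example pointer URL:

```python
from urllib.parse import urlencode

# By reference: the link carries only a pointer; the full ContextObject
# lives as a stand-alone file on a web server (pointer URL from the slide).
params = [
    ("url_ver", "z39.88-2003"),
    ("url_ctx_fmt", "ori:fmt:kev:mtx:ctx"),
    ("url_ctx_ref", "uri:http://www.example.org/objects/1234.txt"),
]
by_ref_link = "http://example.org/myResolver?" + urlencode(params)
```

The link stays the same length no matter how rich the ContextObject is, which is the stated advantage for page size and data transfer.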
21. ContextObject
Describes an item and its context
Uses up to six entities: Referent, Resolver, Referrer, Requester, Referring Entity and Service Type
Can be described separately from its transport
OpenURL is the HTTP GET or POST transport of a ContextObject
22. OpenURL 0.1 example
http://example.org/myResolver?   [Resolver]
sid=myid:mydb   [Referrer]
&id=doi:10.1126/science.275.5304.1320   [Referent]
&id=pmid:9036860
&genre=article
&atitle=Isolation of a common receptor for coxsackie B
&title=Science
&aulast=Bergelson
&auinit=J
&date=1997
&volume=275
&spage=1320
&epage=1323
23. New Entities in OpenURL 1.0
Requester
Allows the identity of the user to be passed to the resolver.
Identification, not authentication.
Referring Entity
For links from references in other articles.
The identity of the article where the reference was found can be important context.
Don't create a link back to this article.
Use information about that article to tailor results.
Service Type
Defines what “services” to present to the user:
full text, interlibrary loan, abstract record, etc.
24. Why separate ContextObject from its transport?
Assumptions:
The notion of describing an item and its context has value beyond today's simple HTML-based linking applications.
Communication between link resolvers is reasonable (see Jenny's examples).
Using SOAP or other protocols can be more efficient in some applications.
Conclusion:
Separating the standard into parts allows more flexible options for future growth.
25. Dealing with complexity
OpenURL 0.1 was simple because it was hardwired.
OpenURL 1.0, with ContextObjects, is extensible and flexible, but much more abstract.
Many options for implementation and extension can lead to confusion and interoperability problems.
Community Profiles were introduced to simplify implementation.
26. San Antonio Profile
Represents the view of the scholarly information community
A list of choices for implementing the OpenURL Framework. For example:
Physical representations (KEV, XML, etc.)
Constraint languages (Matrix, Schema, etc.)
ContextObject format
Metadata formats (Journal, Book, etc.)
Transports (HTTP GET and POST)
Provides a roadmap for building compliant sources and targets
27. Implementing OpenURL 1.0
Goals:
Keep it simple
Use data already available (tagged metadata, DOI)
Stick to widely accepted approaches
28. Implementing OpenURL 1.0
Recommended approach:
Base on the San Antonio Profile
Start with key-encoded-values for metadata
Inline representation of keys and values
Tag data elements according to the registered metadata formats (e.g. ori:fmt:kev:mtx:journal)
Stick to the required entities:
Resolver (BaseURL)
Referent
Referrer
Create links on-the-fly
31. Building an OpenURL: add the referrer
http://resolver.example.org?
url_ver=z39.88-2003
&url_ctx_fmt=ori:fmt:kev:mtx:ctx
&rfr_id=ori:rfr:publisher.com   [Identifier for referrer]
rfr_id is the tag for the Referrer (rfr) ID; “ori:rfr” marks an OpenURL “Referrer” identifier, and “publisher.com” is the domain of the referrer.
32. Building an OpenURL: add the referent
http://resolver.example.org?
url_ver=z39.88-2003
&url_ctx_fmt=ori:fmt:kev:mtx:ctx
&rfr_id=ori:rfr:publisher.com
&rft_id=ori:doi:10.1126/science.275.5304.1320   [Include identifiers]
&rft_id=ori:pmid:9036860
33. Building an OpenURL: add the referent
http://resolver.example.org?
url_ver=z39.88-2003
&url_ctx_fmt=ori:fmt:kev:mtx:ctx
&rfr_id=ori:rfr:publisher.com
&rft_id=ori:doi:10.1126/science.275.5304.1320
&rft_id=ori:pmid:9036860
&rft_val_fmt=ori:fmt:kev:mtx:journal   [Declare the metadata format]
rft_val_fmt shows the referent (rft) format (fmt) is given by value (val); the value indicates we are using key-encoded-values (kev) as defined in the “journal” matrix (mtx) in the registry.
34. Building an OpenURL: add the referent
http://resolver.example.org?
url_ver=z39.88-2003
&url_ctx_fmt=ori:fmt:kev:mtx:ctx
&rfr_id=ori:rfr:publisher.com
&rft_id=ori:doi:10.1126/science.275.5304.1320
&rft_id=ori:pmid:9036860
&rft_val_fmt=ori:fmt:kev:mtx:journal
&rft.genre=article   [Add the metadata elements]
&rft.atitle=Isolation of a common receptor for …
&rft.jtitle=Science
&rft.aulast=Bergelson
&rft.auinit=J
…
Substitute actual values for the item being referenced.
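Slides 31-34 describe an incremental assembly: resolver BaseURL, then referrer, referent identifiers, metadata format declaration, and metadata elements. That sequence can be sketched as one helper (the function name and argument shapes are ours; the key names follow the slides' KEV examples):

```python
from urllib.parse import urlencode

# Sketch of the assembly sequence from slides 31-34; function name and
# argument shapes are ours, key names follow the slides' KEV examples.
def build_openurl_10(base_url, referrer_id, referent_ids, metadata):
    params = [
        ("url_ver", "z39.88-2003"),
        ("url_ctx_fmt", "ori:fmt:kev:mtx:ctx"),
        ("rfr_id", referrer_id),                              # add the referrer
    ]
    params += [("rft_id", rid) for rid in referent_ids]       # referent identifiers
    params.append(("rft_val_fmt", "ori:fmt:kev:mtx:journal")) # metadata format
    params += [("rft." + key, value) for key, value in metadata]  # elements
    return base_url + "?" + urlencode(params)

link = build_openurl_10(
    "http://resolver.example.org",
    "ori:rfr:publisher.com",
    ["ori:doi:10.1126/science.275.5304.1320", "ori:pmid:9036860"],
    [("genre", "article"), ("jtitle", "Science"), ("aulast", "Bergelson")],
)
```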
35. Where to add OpenURL links
Citation/abstract display
OpenURL would allow the user's institution to control access to alternate copies, order through document delivery, etc.
List of references for an article
Must have separately tagged elements.
Allows the user to access the appropriate copy of the referenced item.
38. When to add OpenURL links
Show links only if a customer has a link resolver.
The customer-specific BaseURL, the link to their resolver, is the primary building block of the OpenURL.
Allow the customer to indicate they have a resolver:
Store the BaseURL as customer-specific data in an admin option
Use a cookie to indicate the BaseURL
(http://www.sfxit.com/openurl/cookiepusher.html)
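The decision above (prefer a stored admin BaseURL, fall back to a pushed cookie, otherwise show no link) can be sketched as follows; the setting key, cookie name, and function names are our own assumptions:

```python
# Sketch only: setting key, cookie name, and function names are assumptions.
# Prefer a BaseURL stored as customer-specific admin data, fall back to a
# cookie pushed by the library, otherwise render no OpenURL link at all.
def resolver_base_url(customer_settings, cookies):
    return customer_settings.get("openurl_base_url") or cookies.get("BaseURL")

def maybe_openurl(customer_settings, cookies, query_string):
    base = resolver_base_url(customer_settings, cookies)
    if base is None:
        return None  # customer has no link resolver: show no OpenURL link
    return base + "?" + query_string
```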
39. References
The OpenURL Framework for Context-Sensitive Services
http://library.caltech.edu/openurl/PubComDocs/StdDocs/Part1-PC-200305
The OpenURL Framework for Context-Sensitive Services – Part 2
http://library.caltech.edu/openurl/PubComDocs/StdDocs/Part2-PC-200305
The San Antonio Level 1 Community Profile (SAP1): Implementation Guidelines
http://library.caltech.edu/openurl/PubComDocs/StdDocs/SAP1_Guidelines
Registry for the OpenURL Framework - ANSI/NISO Z39.88-2003
http://alcme.oclc.org/openurl/servlet/OAIHandler?verb=ListSets
Cookie Pusher Document
http://www.sfxit.com/openurl/cookiepusher.html
OpenURL 0.1
http://www.sfxit.com/openurl/openurl.html