A link resolver service helps users reach subscribed content by resolving the link between a citation and the available full text. It takes an OpenURL containing metadata about the cited item and, using that metadata together with the user's context, determines whether the library has access to the full text; if it does, the resolver links directly to it. Common problems include linking errors and unavailable content; standards such as KBART and unified resource management systems help address them.
The document provides an overview of the National Center for Biomedical Ontology (NCBO) technology including REST web services, the BioPortal ontology repository, NCBO web services, and the BioPortal SPARQL endpoint. Key NCBO web services allow users to search ontologies, access ontology terms and hierarchies, propose term annotations, map between ontologies, and annotate data with ontology terms. The document outlines several NCBO tools and resources available for working with biomedical ontologies.
Presented by Christa Burns
At NEBASE Annual Meeting - East (August 9, 2007, Lincoln, NE) and as a NEBASE Hour (September 5, 2007, online)
OCLC is piloting its new WorldCat Local service that will allow your library to customize WorldCat.org as a solution for local discovery and delivery services. WorldCat Local interoperates with locally maintained services like circulation, resource sharing and resolution to full text to present a locally branded interface to your patrons. Attend this session to learn how this new service works and to see the beta being run at the University of Washington Libraries.
The Biodiversity Heritage Library and bibliographic citations: towards new u... - Trish Rose-Sandler
This document discusses the Biodiversity Heritage Library's (BHL) efforts to create a citation repository that would allow users to search and access articles from the BHL. It provides a brief history of related projects like CiteBank. It describes the current capabilities and limitations of accessing citations and full text articles through the BHL. It outlines the next steps needed to fully integrate citations and articles into the BHL by expanding the data model, developing interfaces for adding metadata, and changing how citations and articles are displayed. The goal is to support the Global Names Architecture by facilitating access to taxonomic literature.
Building the new open linked library: Theory and Practice - Trish Rose-Sandler
What tools and services are necessary to build an open linked library and how can we move existing digital library content into an open linked data model and use those tools to repurpose our own content?
The Missing Link: The Evolving Current State of Linked Data for Serials - Fallgren (NASIG)
Linked data may hold the potential to solve some classic serials dilemmas like latest vs. successive entry, or single vs. multiple records for print and online. How do these hopes mesh with the evolving current state of linked data projects in the commercial and library sector as well as with LC’s Bibframe initiative? The speakers will provide three different perspectives. An “early experimenter” and member of the Bibframe group modeling serials will discuss her experiences and thoughts on future directions. A publisher from a company that has reorganized some of its infrastructure and processes to facilitate linked data will share the goals and provide examples of the benefits of that project. Finally, the head of the U.S. ISSN Center will take an ISSN perspective as well as compare international work modeling serials according to FRBR-OO (object-oriented) with the Bibframe serials modeling effort. Audience input will be solicited in order to provide an exchange of ideas and viewpoints. (moderated by Laurie Kaplan)
ThatQuiz.org is a free online quiz making website that allows users to create, publish and share multiple choice quizzes on various topics. Users can create accounts to save and manage their quizzes, collaborate with others, and view statistics on quiz performances. The site aims to make creating and distributing quizzes a fun and easy process for both personal and educational use.
This presentation covers the policies and standards of an archival repository, instructions on how to write an effective records management policy, the physical make-up and provision of a repository, and the equipment and supplies in use.
This document summarizes Corey Harper's presentation on metadata and linked data. It discusses publishing and consuming linked bibliographic and cultural heritage data from libraries, archives, and museums. Examples are given of projects linking data from different institutions to provide richer context and enable new discovery experiences for users. Emerging roles for metadata experts are described, including curating linked open data on the web and collaborating with developers.
This document summarizes how libraries use publisher-provided metadata to provide access to content. It describes how metadata is used in the library catalog, link resolvers, and discovery systems. Publisher metadata must be accurate and distributed to various library systems and standards to effectively support discovery and access for users.
An introduction to the Joint Information Systems Committee Resource Discovery iKit. Includes a look at controlled vocabularies declared in Resource Description Framework (RDF)/Simple Knowledge Organisation System (SKOS) and Wikipedia entries. Presented by Tony Ross at the CILIPS Centenary Conference Branch and Group Day, which took place 5 Jun 2008.
This document provides an introduction to the semantic web and library linked data. It discusses how library data is currently siloed but moving towards being published as linked open data using semantic web standards. Key points covered include the principles of linked data using URIs and RDF triples, examples of library linked data projects, and how RDA is being developed to support linked data. The goal is to make library data more accessible and useful by integrating it into the larger web of data.
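The triple model at the heart of these principles can be shown without any special tooling. The sketch below represents statements as (subject, predicate, object) tuples and serialises them in N-Triples syntax; the book and person URIs are invented, while the predicates come from Dublin Core Terms.

```python
# Minimal illustration of the RDF triple model: each statement is a
# (subject, predicate, object) tuple; subjects and predicates are URIs,
# objects are either URIs or quoted literals.
DCT = "http://purl.org/dc/terms/"

triples = [
    ("http://example.org/book/1", DCT + "title", '"Linked Data for Libraries"'),
    ("http://example.org/book/1", DCT + "creator", "http://example.org/person/7"),
]

def to_ntriples(triples):
    """Serialise triples in N-Triples syntax: one statement per line."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else f"<{o}>"  # literal vs. URI object
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

The second triple links the book to a person URI rather than a text string, which is exactly the move that lets separately published datasets connect to one another.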
Semantic Web Technologies: Changing Bibliographic Descriptions? - Stuart Weibel
Keynote presentation at the North Atlantic Health Science Library meeting, October 26, 2009.
An introduction to semantic web technologies and their relationship to libraries and bibliographic data.
Stuart Weibel, Senior Research Scientist, OCLC Research
The document summarizes the Smithsonian Libraries' efforts to build a new open linked library by exposing their digital collections as linked open data using semantic web standards. They analyzed their existing digital content to identify which data elements could be exposed as linked data. They migrated their website to Drupal to natively support RDFa and allow querying between systems. They provided examples of how book metadata and records from their Taxonomic Literature database would be represented as linked data.
This document discusses library linked data and the future of bibliographic control. It begins by asking what library linked data means and why it is important now. To combine the best of libraries and the web, metadata must be on the web and open for others to use. The principles of linked data are described, including using URIs, HTTP URIs, providing useful information in RDF, and including links to other URIs. The building blocks of linked data like RDF and triples are explained. Examples of existing library linked data projects are provided. The BIBFRAME initiative to develop a new framework to manage library data as linked data is outlined.
Agile resources on the open web … a global digital library - Jisc
The document summarizes a presentation about JISC's efforts to create an open, global digital library and infrastructure for accessing educational resources. It discusses JISC's role in funding content providers and shared services; principles for the infrastructure including being integrated, interoperable, and sustainable; creating open metadata and linking datasets; and a vision of students and researchers having easy access to integrated library, museum and archive resources through a collaborative framework.
Library as Place, Place as Library: Duality and the Power of Cooperation - Karen S Calhoun
This talk, delivered at the February 2010 OCLC Regional Council Seminar in Auckland NZ, explores the turbulent conditions in which libraries are evolving as both places and virtual spaces on the Web. How are these conditions driving change in library collections, catalogues, and cooperative systems? What are OCLC's strategies for helping today's libraries gain visibility and impact through cooperation and data sharing? If we were building a system for library cooperation today, what would it look like?
This document discusses metadata normalization and linked open data in libraries. It provides an overview of discovery systems like Primo and describes challenges in normalizing metadata from different sources and mapping them to a common schema for use in a single system like Primo. It also discusses the benefits of applying linked open data principles to library data and describes some ongoing work towards applying these principles.
This document summarizes the Cambridge Open Metadata project. The project aims to release Cambridge University Library's bibliographic records as open data in various formats like XML, RDF, and JSON. The goals are to drive innovation, provide value for taxpayer money, and promote the library's collections. Key activities include converting records to RDF, adding subject headings from external sources, and determining appropriate open licenses for records from different vendors. The project hopes to make more of the library's data reusable and help non-library developers build new tools and services.
Library discovery: past, present and some futures - lisld
A presentation at the NISO virtual conference on Webscale Discovery Services, 20 November 2013.
Considers some of the issues that have led to the adoption of these services, and some future directions.
Distinguishes between discovery (providing a library destination) and discoverability (making stuff discoverable elsewhere).
Porting Library Vocabularies to the Semantic Web - IFLA 2010 - Bernard Vatant
The document discusses opportunities for libraries to contribute their established vocabularies and classification systems to the Semantic Web. It outlines steps libraries can take to audit, publish, and integrate their vocabularies according to Semantic Web standards. This will help bring necessary structure and organization to the Web of data by leveraging libraries' proven heritage in developing controlled vocabularies.
This webinar is about the Open Source software that is available to supplement your library system, regardless of whether you are using an Open Source Library System like Koha or Evergreen or a proprietary system like Millennium, CARL, or Horizon.
Software that dramatically extends and expands the capabilities of your library system falls into two main categories: discovery interfaces and metasearch tools. While other products (e.g. content management systems) may integrate with your ILS to some degree, we will focus our attention on discovery and metasearch tools, how they work and who is using them.
The Power of Sharing Linked Data: Bibliothekartag 2014 - Richard Wallis
The document discusses OCLC's efforts to share library data as linked open data on the web. It describes OCLC releasing WorldCat data including 311 million records as linked data, using schemas like Schema.org and linking to other sources like VIAF. It also discusses the release of 197 million linked data work descriptions from WorldCat in April 2014. The goal is to make library data part of the web by giving search engines and users what they want, like structured data at web scale with identifiers and links.
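The "structured data at web scale" idea can be made concrete with a small JSON-LD sketch using Schema.org vocabulary, the kind of machine-readable description search engines consume. The work and VIAF identifiers below are invented placeholders, not real WorldCat or VIAF IDs.

```python
import json

# A hedged sketch of a Schema.org JSON-LD description of a book;
# the @id values are illustrative placeholders, not real identifiers.
work = {
    "@context": "https://schema.org",
    "@type": "Book",
    "@id": "http://example.org/entity/work/id/12345",
    "name": "Example Title",
    "author": {"@id": "http://viaf.org/viaf/0000000"},  # link out to VIAF
    "workExample": [{"@type": "Book", "isbn": "9780000000000"}],
}

jsonld = json.dumps(work, indent=2)
```

Linking the author to a VIAF URI instead of embedding a name string is what ties the record into the wider web of identifiers the abstract describes.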
Fuller Disclosure: Getting More Collections into the Network Flow - kramsey
The document discusses how libraries can make more of their collections discoverable by being where users search for information online. It recommends focusing on collection-level descriptions rather than exhaustive item-level metadata. Libraries should digitize materials, share metadata across systems, and engage users to add descriptive information over time. The goal is to expose hidden collections and get them integrated into the online information landscape where discovery happens.
Open for Business: Open Archives, OpenURL, RSS and the Dublin Core - Andy Powell
UKOLN works with a range of open standards and protocols that facilitate digital information management, including OpenURL, RSS, Dublin Core, and the OAI Protocol for Metadata Harvesting. Andy Powell from UKOLN gave a presentation on using these standards to integrate resources from multiple content providers and enable user-focused discovery and access across heterogeneous collections. The presentation provided an overview of each standard and how they address issues like joining up discovery services with delivery of appropriate copies.
WNL 122: Towards Social Semantic by Samhati Soor - Kishor Satpathy
Paper presented during the International Conference on What's next in libraries? Trends, Space, and Partnerships, held January 21-23, 2015 at NIT Silchar, Assam. It was jointly organized by NIT Silchar in association with its USA partner, the Mortenson Center for International Library Programs, University of Illinois at Urbana-Champaign.
Usage of Linked Data: Introduction and Application Scenarios - EUCLID project
This presentation introduces the main principles of Linked Data, the underlying technologies and background standards. It provides basic knowledge for how data can be published over the Web, how it can be queried, and what are the possible use cases and benefits. As an example, we use the development of a music portal (based on the MusicBrainz dataset), which facilitates access to a wide range of information and multimedia resources relating to music.
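A hedged example of the kind of SPARQL query such a music portal might issue is shown below as a Python string. The prefixes follow the Music Ontology (`mo:`), FOAF, and Dublin Core, but the exact vocabulary and endpoint in any given dataset may differ.

```python
# Illustrative SPARQL: find albums by a named artist. The vocabulary
# (Music Ontology, FOAF, Dublin Core) is real; the artist is invented,
# and a real dataset may model things differently.
query = """
PREFIX mo:   <http://purl.org/ontology/mo/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc:   <http://purl.org/dc/elements/1.1/>

SELECT ?album ?title WHERE {
  ?artist foaf:name "Example Artist" .
  ?album  a mo:Record ;
          foaf:maker ?artist ;
          dc:title ?title .
}
LIMIT 10
"""
```

A portal would POST this to its SPARQL endpoint and render the bindings of `?album` and `?title`, pulling in further resources (images, reviews, recordings) by following the album URIs.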
This session will demystify (generative) AI by exploring its workings as an advanced statistical modelling tool (suitable for any level of technical knowledge). Not only will this session explain the technological underpinnings of AI, it will also address concerns and (long-term) requirements around ethical and practical usage of AI, including data preparation and cleaning, data ownership, and the value of data generated - but not owned - by libraries. It will also discuss potential (hypothetical) use cases of AI in collections environments and making collections data AI-ready, providing examples of AI capabilities and applications beyond chatbots.
Christina Dinh Nguyen, University of Toronto Mississauga Library
In the world of digital literacies, liaison and instructional librarians are increasingly coming to terms with a new term: algorithmic literacy. No matter the liaison or instruction subjects - computer science, sociology, language and literature, chemistry, physics, economics, or other - students are grappling with assignments that demand a critical understanding, or even use, of algorithms. Over the course of this session, we'll discuss the term 'algorithmic literacies,' explore how it fits into other digital literacies, and see why it, as a curriculum, might belong at your library. We'll also look at some examples of practical pedagogical methods you can implement right away, depending on what types of AL lessons you want to teach, and who your patrons are. Lastly, we'll discuss how librarians should view themselves as co-learners when working with AL skills. This session seeks to bring together participants from across different libraries, with diverse missions, visions, and mandates, to explore ways we can all benefit from teaching AL. If time permits, we may discuss how text and data librarians (functional specialists) can support the development of this curriculum.
David Pride, The Open University
In this paper, we present CORE-GPT, a novel question-answering platform that combines GPT-based language models and more than 32 million full-text open access scientific articles from CORE. We first demonstrate that GPT3.5 and GPT4 cannot be relied upon to provide references or citations for generated text. We then introduce CORE-GPT, which delivers evidence-based answers to questions, along with citations and links to the cited papers, greatly increasing the trustworthiness of the answers and reducing the risk of hallucinations.
Cath Dishman, Cenyu Shen, Katherine Stephan
Although scholarly communication has become more open, problems with predatory and problematic publishers remain. There are commercial list providers, start-up and renegade Internet lists of good and bad publishers, and the researchers, publishers, and assessors who try to understand what being on or off a list means for themselves, their careers, and their institutions. Still, these problems persist and leave many asking: where is the list?
This plenary panel will discuss the problems of “predatory” publishing and what, if anything, publishers, our community and researchers can do to try to help minimise their abundance and impact.
Beth Montague-Hellen, Francis Crick Institute, Katie Fraser, University of Nottingham
Open Access is a foundational topic in Scholarly Communications. However, when information professionals and publishers talk about its future, it is nearly always Gold open access we discuss. Green was seen as the big solution for providing access to those who couldn’t afford it. However, publishers have protested that Green destroys their business models. How true is this, and are we even all talking the same language when we talk about Green?
Chris Banks, Imperial College London, Caren Milloy, Jisc
Transitional agreements were developed in response to funder policy and institutional demand to constrain costs and facilitate funder compliance. They have since become the dominant model by which UK research outputs are made open access. In January 2023, Jisc instigated a critical review of TAs and the OA landscape to provide an evidence base to inform a conversation on the desired future state of research dissemination. This session will discuss the key findings of the review and its impact on a sector-wide consultation and concrete actions in the UK and beyond.
Michael Levine-Clark, University of Denver, Jason Price, SCELC Library Consortium
As transformative agreements emerge as a new standard, it is critical for libraries, consortia, publishers, and vendors to have consistent and comprehensive data – yet data around publication profiles, authorship, and readership has been shown to be highly variable in availability and accuracy. Building on prior research around frameworks for assessing the combined value of open publishing and comprehensive read access that these deals provide, we will address multi-dimensional perspectives to the challenges that the industry faces with the dissemination, collection, and analysis of data about authorship, readership, and value.
Hylke Koers, STM Solutions
Get Full Text Research (GetFTR) launched in 2020 with the objective of streamlining discovery and access of scholarly content in the many tools that researchers use today, such as Dimensions, Semantic Scholar, Mendeley, and many others. It works equally well for open access content as it does for subscription-based content, providing researchers with recognizable buttons and indicators to get them to the most up-to-date version of content with minimal effort. Currently, around 30,000 OA articles are accessed every day via GetFTR links.
Gareth Cole, Loughborough University, Adrian Clark, Figshare
Researchers face more pressure to share their research data than ever before, owing to a rise in funder policies and momentum towards greater openness across the research landscape. Although policies for data sharing are in place, engagement work is undertaken by librarians in order to ensure repository uptake and compliance.
We will discuss a particular strategy implemented at Loughborough University that involved the application of conceptual messaging frameworks to engagement activities in order to promote and encourage use of our Figshare-powered repository. We will showcase the rationale behind the adoption of messaging frameworks for library outreach and some practical examples.
Mark Lester, Cardiff Metropolitan University
This talk will outline how a completely accidental occurrence led to brand new avenues for open research advocacy and reasons for being. This advocacy has occurred within student communities such as trainee teachers, student psychologists and (especially) those soon losing access to subscription-based library content. Alongside these new forms of advocacy, these ethical examples of AI use cases have begun to form a cornerstone of directly connecting the work of the library to new technology.
Simon Bell, Bristol University Press
The UN SDG Publishers Compact, launched in 2020, was set up to inspire action among publishers to accelerate progress to achieve the Sustainable Development Goals by 2030, asking signatories to develop sustainable practices, act as champions and publish books and journals that will “inform, develop and inspire action in that direction”.
This Lightning Talk will discuss how our new Bristol University Press Digital has been developed as part of our mission to contribute a meaningful and impactful response to this call to action as well as the global social challenges we face.
Using thematic tagging to create uniquely curated themed eBook collections around the Global Social Challenges, Bristol University Press Digital responds directly to the need to provide the scholarly community access to a comprehensive range of SDG-focussed content while minimising the time and resource spent at the institution end in collating content and maintaining collection relevance to rapidly evolving themes.
Jenni Adams, University of Sheffield, Ric Campbell, University of Sheffield
Academic researchers are becoming increasingly aware of the need to make data and software FAIR in order to support the sharing and reuse of non-publication outputs. Currently there is still a lack of concise and practical guidance on how to achieve this in the context of specific data types and disciplines.
This presentation details recent and ongoing work at the University of Sheffield to bridge this gap. It will explore the development of a FAIR resource with specialist guidance for a range of data types and will examine the planned development of this project during the period 2023-25.
Tasha Mellins-Cohen, COUNTER & Mellins-Cohen Consulting, Joanna Ball, DOAJ, Yvonne Campfens, OA Switchboard, Adam Der, Max Planck Digital Library
Community-led organizations like DOAJ (Directory of Open Access Journals), COUNTER (the standard for usage metrics) and OA Switchboard (information exchange for OA publications) are committed to providing reliable, not-for-profit services and standards essential for a well-functioning global research ecosystem. These organizations operate behind the scenes, with low budgets and limited staffing – no salespeople, marketing teams, travel budgets, or in-house technology support. They collaborate with one another and with bigger infrastructure bodies like Crossref and ORCID, creating the foundations on which much scholarly infrastructure relies.
These organizations deliver value through open infrastructure, data and standards, and naturally services and tools have been built by commercial and not-for-profit groups that capitalize on their open, interoperable data and services – many of which you are likely to recognize and may use on a regular basis.
Hear from the Directors of COUNTER, DOAJ and OA Switchboard, as well as a library leader, on the role of these organizations, the challenges they face and why support from the community is essential to their sustainability.
Camille Lemieux, Springer Nature
What is the current state of diversity, equity, and inclusion in the scholarly publishing community? It's time to take a thorough look at the 2023 global Workplace Equity (WE) Survey results. The C4DISC coalition conducted the WE Survey to capture perceptions, experiences, and demographics of colleagues working at publishers, associations, libraries, and many more types of organizations in the global community. Four key themes emerged from the 2023 results, which will be compared to the findings from the first WE Survey conducted in 2018. Recommendations for actions organisations can consider within their contexts will be proposed and discussed.
Rob Johnson, Research Consulting
Angela Cochran, American Society of Clinical Oncology
Gaynor Redvers-Mutton, Biochemical Society
Since 2015, the number of self-publishing learned societies in the UK has decreased by over a third, with the remaining societies experiencing real-term revenue declines. All around the world, society publishers are struggling with increased competition from commercial publishers and the rise of open access business models that reward quantity over quality. We will delve into the distinctive position of societies in research, examine the challenges confronting UK and US learned society publishers, and explore actionable steps for libraries and policymakers to support the continued relevance of learned society publishers in the evolving scholarly landscape.
Simon Bell, Clare Hooper, Katharine Horton, Ian Morgan
Over the last few years we have witnessed a seismic shift in the scholarly ecosystem. Three years since the outset of the COVID pandemic and the establishment of the UN SDG Publishers Compact, this discussion-led presentation will look at how four UK university presses have adopted a consultative and collaborative approach on projects to support their institutional missions and engage with the wider scholarly community, while building on a commitment to make a meaningful difference to society.
This panel discussion will combine the perspectives of four UK based university presses, all with distinct identities and varied publishing programs drawn from the humanities, arts and social sciences, yet with a shared recognition of the importance of collaborating and co-operating on a shared vision to support accessibility and inclusivity within the wider scholarly community and maintain a rich bibliodiversity.
While research support teams are generally small and specialist in nature, increased demand for their services has been observed across the sector. This is particularly true for teaching-intensive institutions. As a pilot to expand research support across ARU library, the library graduate trainee was seconded to the research services team for a month. This dialogue between the former trainee and manager will discuss what the experience and outcomes of the secondment were from different perspectives. The conversation will also explore the exposure Library and Information Studies students have to research services throughout their degree.
Tim Fellows & Emily Wild, Jisc
Octopus.ac is a UKRI funded research publishing model, designed to promote best practice. Intended to sit alongside journals, Octopus provides a space for researcher collaboration, recording work in detail, and receiving feedback from others, allowing journals to focus on narrative.
The platform removes existing barriers to publishing. It’s an entirely free, open space for researchers, without editorial and pre-publication peer review processes. The only requirement for authors is a valid ORCiD ID. Without barriers, Octopus must provide feedback mechanisms to ensure the community can self-moderate. During this session, we’ll explore Octopus’ aims to foster a collaborative environment and incentivise quality.
David Parker, Publisher and Founder, Lived Places Publishing
Dr. Kadian Pow, Lecturer in Sociology and Black Studies & LPP Author, Birmingham City University
Natasha Edmonds, Director, Publisher and Industry Strategy, Clarivate
Library patrons want to search for and locate authors by particular identity markers, such as gender identification, country of origin, sexual orientation, nature of disability, and the many intersectional points that allow an author to express a point-of-view. Artificial Intelligence, skilled web researchers, and data scientists in general struggle to achieve accuracy on single identity markers, such as gender. And what right does anybody have to affix identity metadata to an author other than the author themselves? And what of the risks in disseminating author identity metadata in electronic distribution platforms and in library catalog systems? Can a "fully informed" author even imagine all the possible misuses of their identity metadata?
4. The stone age?
http://www.flickr.com/photos/thaqela/6774236608/
5. WHAT IS A LINK RESOLVER?
R. H. C. Davis. A History of Medieval Europe. 2nd ed. Harlow: Longman, 1988.
6.
7. Carry out a search → Look up reference in library catalogue and/or website → Determine if the library subscribes → Search for the article → Full text article
8. Carry out a search → Link Resolver → Full text article
10. ANSI/NISO Z39.88, THE OPENURL FRAMEWORK FOR CONTEXT-SENSITIVE SERVICES:
The OpenURL Framework Standard defines an architecture for creating OpenURL Framework Applications. An OpenURL Framework Application is a networked service environment, in which packages of information are transported over a network. These packages have a description of a referenced resource at their core, and they are transported with the intent of obtaining context-sensitive services pertaining to the referenced resource. To enable the recipients of these packages to deliver such context-sensitive services, each package describes the referenced resource itself, the network context in which the resource is referenced, and the context in which the service request takes place.
http://www.niso.org/
11. WIKIPEDIA SAYS:
OpenURL is a standardized format of Uniform Resource Locator (URL) intended to enable Internet users to more easily find a copy of a resource that they are allowed to access. Although OpenURL can be used with any kind of resource on the Internet, it is most heavily used by libraries to help connect patrons to subscription content.
http://en.wikipedia.org/wiki/OpenURL
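To make the format concrete, here is a minimal sketch (in Python) of how a source might assemble an OpenURL query string from citation metadata. The key names follow the KEV (key/encoded-value) style used by Z39.88 for journal articles; the resolver base URL and the citation values are invented for illustration, not taken from any real service.

```python
from urllib.parse import urlencode

def build_openurl(base_url, metadata):
    """Assemble an OpenURL-style KEV query string from citation metadata.

    `base_url` is the institution's link resolver endpoint (hypothetical
    here); `metadata` maps KEV keys to values describing the referent.
    """
    params = {
        # Declare the OpenURL version and the journal metadata format
        "url_ver": "Z39.88-2004",
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    params.update(metadata)
    return base_url + "?" + urlencode(params)

# Illustrative citation; all values are made up for the example
link = build_openurl(
    "https://resolver.example.edu/findit",  # hypothetical resolver URL
    {
        "rft.jtitle": "Journal of Accounting Research",
        "rft.issn": "0021-8456",
        "rft.date": "2004",
        "rft.volume": "42",
        "rft.spage": "1",
    },
)
print(link)
```

An A&I database, Google Scholar, or a publisher page would embed a link like this for each citation, and the resolver would parse the metadata back out to decide what services to offer.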
12. WIKIPEDIA SAYS:
A knowledge base or knowledgebase (also KB or kb) is a special kind of database for knowledge management. A knowledge base is an information repository that provides a means for information to be collected, organized, shared, searched and utilized. It can be either machine-readable or intended for human use.
http://en.wikipedia.org/wiki/Knowledge_base
13.
14.
15.
16. WHO’S INVOLVED?
Libraries, link resolver suppliers and content providers
25. COMMONLY REPORTED ISSUES
Errors in linking: arrive at an error page; arrive at an incorrect page; not arriving at article level
Access issues: content not available for a particular volume/issue/year; content not available at all!
Missing resource: why isn’t a title/package appearing in FindIt (the link resolver)?
Hello. My name is Siobhán Burke and I worked in the Electronic Resources team at the University of Manchester Library for 4 years, first as an assistant for 2 years before taking over the team. So my experience and everything I want to share with you today is based on my own experience of learning on the job, as I certainly wasn’t taught about knowledge bases and link resolvers at my library school. [CLICK]
That’s essentially what a link resolver does, but what is it and why? I’ll attempt to answer WHY we have link resolvers first, then cover what they do in more specific detail before saying more about how I managed the system at Manchester. [CLICK]
Why do we need a link resolver? Thinking about what we had before... [CLICK]
Life before a link resolver? Well perhaps the situation was not quite so archaic as the stone age with cavemen but it was certainly tedious.[CLICK]
Traditional research involved finding information and being led onto other information referenced by the author or authors. This is just an image of one of my own old undergraduate history books. Just in case anyone is interested and because I am a pedantic librarian... [CLICK] there’s the reference. Quite a nice overview text of medieval European history. So if you wanted to find these other resources, you’d have to look each one up in turn using the library catalogue, which thankfully in my time was electronic. But with the advent of more and more electronic resources, both journals and databases in the first instance, the mechanisms for the way research was carried out were changing, and perhaps instead of finding a list in a book like this you would find a list of potential resources in an Abstracting and Indexing database such as Web of Knowledge, Scopus or PubMed. After carrying out a search using one of these services, you might get a list like this: [CLICK]
And this is where the need for a link resolver really came in. Pre-link resolver, you would [CLICK]
Carry out your search, look up each of the references you wanted to find, [CLICK] check the library catalogue’s holdings, sometimes you would have had to go and search a separate list of journals on the library’s website somewhere, [CLICK] just to see if the library subscribed, and only then link through to the service. Of course, you would have only linked through to the homepage of the journal. [CLICK] You would still have to carry out a further search using the citation information from your A&I database results before [CLICK] finally getting to the full text.[CLICK]
But with the introduction of a link resolver, those 3 processes in the middle were replaced by that system. Not only was it checking your library’s holdings to see if full-text was available, but it was also providing a link directly to the full text of the resource, and not just the landing page but to article level full text. [CLICK] Overall, the link resolver enabled a seamless process for the user, bridging the gap between discovering a resource and reading the full-text. [CLICK]
We now know what its purpose is but what actually is a link resolver? Simply, I guess it’s a system that makes use of OpenURL in conjunction with a knowledgebase. So what is OpenURL? And what is a knowledgebase? [CLICK]
That’s an abstract from the NISO standard of the OpenURL framework but frankly the definition from that wonderful librarian’s tool Wikipedia is more user friendly[CLICK]
[Read first sentence][CLICK]
So a Knowledgebase is essentially a database and in the context of the link resolvers we are talking about, they are very big and complex databases. Maintaining the knowledgebase is also where the main work from a librarian’s perspective occurs.[CLICK]
Here’s an example of a link resolver menu from Manchester library’s perspective as provided by SFX. In this instance I’m looking for an article from this Journal, from 2004 and I’m presented with 3 different service provider options. As a user, there are various reasons that determine which one at that point I would choose. I’ll touch on that again later.But if we wanted a different article from a different year, say 2008, then this is the menu that would be presented to a user.[CLICK]
So same title, but because we have chosen a different year, I am now only presented with 2 options. And that’s the knowledgebase informing what information should populate the menu that the user sees.[CLICK]
But when you see the generic page for this journal, where I haven’t pre-chosen an article, you will see that the 3 service providers are present again but also [CLICK] displaying different coverage information. And it is that coverage information, along with being activated in the first place, that determines whether a service provider appears in a menu or not. So it is vital that the coverage information is correct. If it’s not correct, a user may not find full-text or may choose another copy, which affects the usage for that service provider, for that journal, for that institution. So getting things right with a link resolver is crucial and involves 2 other main stakeholders in making that right. [CLICK]
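The coverage logic described here can be sketched as a simple range check: the knowledgebase only offers a provider when the requested year falls inside its activated coverage. The sketch below is an illustrative model only (the provider names and year ranges are invented), not SFX's actual implementation:

```python
def available_providers(holdings, year):
    """Return the providers whose activated coverage includes `year`.

    `holdings` maps a provider name to a (first_year, last_year) tuple;
    a last_year of None means coverage runs to the present.
    """
    matches = []
    for provider, (first, last) in holdings.items():
        if year >= first and (last is None or year <= last):
            matches.append(provider)
    return matches

# Hypothetical holdings for one journal
holdings = {
    "JSTOR Arts & Sciences 4": (1963, 2005),  # archive with a fixed end
    "Publisher platform": (1997, None),       # current subscription
}
print(available_providers(holdings, 2004))  # both providers qualify
print(available_providers(holdings, 2008))  # only the current subscription
```

This is why the same journal produced different menus for a 2004 article and a 2008 article: the request year is tested against each activated portfolio's coverage before the menu is built.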
These are the key players in this stakeholder triangle. Libraries can vary but are commonly higher education libraries due to the costs of having a system like this. Content providers include publishers of scholarly content. And link resolvers are as the name suggests. [CLICK]
[CLICK]
As I mentioned before, the knowledgebase is largely where the work is done by a librarian. At Manchester we are not a hosted service, so we have other tasks to carry out such as maintaining our server, but that’s really only once a month. In a small team of 2.5 people, we maintain this knowledgebase along with running a helpdesk for electronic resources and other tasks. But maintaining the knowledgebase with 100% accuracy could easily employ more people. So how do you manage a knowledgebase and how does it work in practice? As I’ve said before, I can only account for how we operate in Manchester, but the best way to explain is to show you. [CLICK]
So having logged in... This is the first screen you see, showing all the functions available to you, depending on your role. I’m logged in as an administrator, which is why they all appear available to me. There are lots of possibilities here: KBUpdate is where you manage your knowledgebase and software updates, and you control who has access under Administration. But the main areas that you use on a regular basis are these top 2 sections, KBManager and KBTools. KBManager would be daily, hourly even. Anything else generally has been done when you got started, such as customising the interface your users see. [CLICK]
So looking more closely at this section, KBManager, and again really, on a very regular basis, only Targets and Objects are used. Sources are your A&I databases, and now also any publisher who can offer OpenURL links from their site; for example, the citations contained in articles can now contain links to that content, regardless of who the publisher of that material is. This has become the standard and is expected by users, and when those links are not there, it is likely to have an impact on whether that user decides to get hold of that article. Other sources include Google Scholar, a favourite among students, and your own library catalogues, and it is only this one that I have had to configure locally; the rest are added for us. So going through the terminology here, a Source is that which generates the OpenURL from the bibliographical information it has, which then links via OpenURL to produce an SFX or FindIt menu as it’s branded, linking through to full text on a Target. Targets are your content packages, top-level groupings from publishers or aggregators. Objects are the individual records of a particular resource, be it a single ejournal or ebook. It is only when the two entities are combined, i.e. this journal from this service provider, that an object portfolio, SFX’s terminology, is presented to a user in an SFX menu. So where do you start? The easiest scenario might be that the library has purchased a subscription to one new journal title. Sounds easy. And often it is easy. But there can often be issues. Hopefully you know the ISSN and you simply look that up. Click on Objects and search for 1475-679X or 0021-8456. [CLICK]
So this is the same journal from before. I’m using our test instance so that I can’t do any damage to the live service, and also none of the options are activated. So what you are looking at now are all the possible options for who provides access to this journal. And according to this, there are 65 possible options. But this includes different service types: GetTOC (tables of contents), GetDocumentDelivery, GetFullTxt, and there are others such as GetSelectedFullText, GetAbstract etc. At Manchester, we used to activate some of those options, but in reality, if a user sees an option presented to them on a FindIt menu, then they are expecting to get to full text. They would be very annoyed to only get to an abstract. In reality, we really only use GetFullTxt, apart from some other services that are offered alongside the full text link in the FindIt menu. So from your options, you can narrow them down to full text (wish I could do this in reality, of course). So hopefully you know who your access to the resource is coming from. Often you do; it’ll be Taylor & Francis, OUP, Science Direct, well-known major players. But sometimes it’s not clear, if the publisher is new to you and may be a little obscure. That can be for reasons local to us at Manchester but can also be due to how the options are entered on SFX. But let’s assume you do know the provider. Now this journal appeared before, but who can remember the 3 providers which were activated? There’s a prize for the first person to name all three! OK, so I’ll go with the JSTOR option. The list is alphabetical, so onto the next page and there it is: JSTOR Arts & Sciences 4, and it has the correct coverage. But if you did need to change it, you simply click on E for edit. This opens a new window where you can do various things, most commonly alter coverage information. Usually I just copy and paste this little script and edit the details as necessary, or you can click here and input the information.
But it’s fine, so we will turn it on. All you do is click here. You probably can’t see it, but there is an invisible tick; I click on it and it’s now activated. To check that all works fine, just click the SFX symbol, the FindIt menu opens and away you go to the service provider. You’ll have to forgive the state of this test interface though. But of course we activate more than 1 title at a time; we can be activating thousands of titles, especially ebooks, as we catch up with all of the new packages being added to SFX’s knowledgebase. So I know that we subscribe to all of JSTOR Arts & Sciences 4, so I’ll show you quickly how easy it is to activate that. From here, you need only click on the package name.
This takes you back up the hierarchy to Target level. And again, you click to tick and go into S for service level. This is where I mentioned before about GetAbstract/TOC etc, but in this case there is only one option, so you click again and then you go down a level to portfolio or individual title level. You can activate them all with one click by clicking ‘Activate All’. In one click that has turned on all 157 titles in that package. What we generally assume with this type of package is that the name of the package tallies with what we have chosen to subscribe to, that all the titles of the package are listed in SFX under that package name and of course that all of the coverage information is correct. But there are times when we are subscribing to custom content or, for whatever reason, it’s not the same on SFX as what we think we have or what a publisher tells us we have. And this is where Dataloader is an option, particularly if you are working with numbers over 200, 300 titles. There simply isn’t the time to check titles individually, when you think of the changes that happen annually in deals and moving walls and embargo periods. It’s fine to manually check, say, a closed package that is never going to be altered, but otherwise it’s not feasible.
For Dataloader, you go back to the home screen and click Dataloader. What Dataloader allows you to do is upload a file to your SFX instance containing your local information, and Dataloader matches that with the SFX knowledgebase and carries out any commands that are contained in that file. First you select the package that you want to alter, then find your file. The file is just a spreadsheet saved as a text file with the first column consisting of ISSNs or ISBNs, and you can do various things with that list from there, such as activate titles, change the coverage information, etc. It works very well for the most part, but there is often a little manual work still to do, because for different reasons it can’t find a particular match between the file and the SFX knowledgebase. [CLICK]
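As a rough illustration of the matching step described above, the sketch below reads a tab-separated identifier list and splits it into rows that match a (toy) knowledgebase and rows needing manual follow-up. The file layout and all the data here are assumptions for the example, not SFX's actual Dataloader format:

```python
import csv
import io

def match_identifiers(tsv_text, knowledgebase):
    """Split a tab-separated holdings list into matched and unmatched IDs.

    The first column of each row is an ISSN/ISBN; `knowledgebase` is the
    set of identifiers known for the target package (toy data here).
    """
    matched, unmatched = [], []
    for row in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        if not row:
            continue  # skip blank lines in the upload file
        (matched if row[0] in knowledgebase else unmatched).append(row[0])
    return matched, unmatched

# Hypothetical upload file: ISSN, then a coverage start year
upload = "0021-8456\t1963\n1475-679X\t2001\n9999-0000\t1990\n"
kb = {"0021-8456", "1475-679X"}
ok, leftover = match_identifiers(upload, kb)
print(ok)        # identifiers that can be activated automatically
print(leftover)  # rows needing manual checking
```

The leftover list corresponds to the "little manual work" mentioned: identifiers the loader could not match against the knowledgebase.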
[CLICK]
So throughout I’ve talked in positive terms and a perfect-world scenario where everything works as you would expect. But I would hope that now you can see where errors can occur. These are some of the most commonly reported problems that we would deal with on our helpdesk, where SFX was concerned. [READ SLIDE] Expectations are raised now, with the function offered by link resolvers ingrained in the research experience. And I do mean research with a small r, as that pertains to anyone carrying out any level of research, from your first-year undergraduate all the way up to Vice Chancellors. When things go wrong, users are so used to it working that they don’t know what to do or where to go. They react as though everything’s broken: the library, the resource they’re trying to get to. This is really poor for everyone’s experience of all our services. They therefore are wary of the library, the publishers’ websites and the link resolver system. [CLICK]
Increasingly, the content available via the link resolver has not just grown in volume but also become more and more complex, and I’m sure it’s been difficult for everyone involved in maintaining these services and systems. These are just some of the current and future issues that I’m aware of. [CLICK]
But thankfully there are current and future initiatives that should alleviate and hopefully finally resolve the problems experienced with using and managing link resolvers. The first is already here: KBART, a UKSG initiative that involves representatives from all in that stakeholder triangle I showed you above. I’m no KBART expert, so please look at their website shown here if you want more information. And I would particularly recommend that to any content providers/publishers in the room, as it will expand on some of the potential pitfalls involved in participating with link resolvers. The second is URM systems, or next-generation library management systems. These aim to bring together all the disparate systems currently used by libraries to organise and manage their processes. Currently the acquisition of resources is in the traditional library management system, and yet there is this other link resolver system with a knowledge base, and the two then have to marry up. Bringing them into one system again should hopefully alleviate a lot of the duplication that currently occurs in the work of libraries in managing all their collections. So although I have highlighted some issues and problems, I hope I’ve also explained how important and useful link resolvers are, and that on the whole they work extremely well, and now I’m also hopeful that in the future those final issues will be resolved. So on that I will finish and ask if there are any questions? [CLICK]
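KBART addresses exactly the coverage-accuracy problems described earlier by defining a tab-separated title-list format with standard column headings, so holdings data transfers predictably between content providers and knowledgebases. As a minimal sketch, the snippet below parses a KBART-style row into a dict; only a few of the standard columns are shown, and the sample data is invented:

```python
import csv
import io

# A few of the standard KBART column headings (the full set is larger)
KBART_HEADER = [
    "publication_title", "print_identifier", "online_identifier",
    "date_first_issue_online", "date_last_issue_online", "title_url",
]

def parse_kbart(tsv_text):
    """Parse a KBART-style tab-separated title list into a list of dicts."""
    return list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))

# Invented sample row laid out in the KBART columns above
sample = "\t".join(KBART_HEADER) + "\n" + "\t".join([
    "Journal of Accounting Research", "0021-8456", "1475-679X",
    "1963-01-01", "", "https://example.org/jar",
])
rows = parse_kbart(sample)
print(rows[0]["print_identifier"])
```

An empty `date_last_issue_online`, as in the sample, is how KBART lists signal coverage running to the present, which is precisely the information a link resolver's knowledgebase needs to build accurate menus.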