Starting from scratch – building the perfect digital repository (Violeta Ilik)
By establishing a digital repository on the Feinberg School of Medicine (FSM), Northwestern University, Chicago campus, we anticipate gaining the ability to create, share, and preserve attractive, functional, and citable digital collections and exhibits. Galter Health Sciences Library did not have a repository as of November 2014. In just a few months we formed a small team charged with selecting the most suitable open source platform for our digital repository software. We followed the National Library of Medicine master evaluation criteria, examining factors that included: functionality, scalability, extensibility, interoperability, ease of deployment, system security, physical environment, platform support, demonstrated successful deployments, system support, strength of development community, stability of development organization, and strength of the technology roadmap for the future. These factors were important in our case given the desire to connect the digital repository with another platform that is an essential piece of the big FSM picture: VIVO. VIVO is a linked data platform that serves as a researcher hub, providing the names of researchers from academic institutions along with their research output, affiliation, research overview, service, background, researcher identities, teaching, and much more.
Integrating with others: Stable VIVO URIs for local authority records; linking to VIAF; ORCID organizational identifiers; W3C Dataset ontology work by Melissa Haendel & Violeta Ilik, VIVO Implementation Fest, Durham NC, March 20, 2014
Distributed Person Data
Violeta Ilik, Digital Innovations Librarian, Northwestern University Feinberg School of Medicine, Galter Health Sciences Library, Chicago
What do MARC, RDF, and OWL have in common? (Violeta Ilik)
It is understood that in the current library ecosystem, catalogers must be willing to adapt to the new semantic web environment while keeping in mind the crucial library mission: providing efficient access to information. How can catalogers transform their jobs to enable library users to retrieve information more effectively in the age of the semantic web?
Researchers have argued that catalogers have the fundamental skills to successfully work with and repurpose the metadata originally created for use in traditional library systems by utilizing various programming languages. In the new environment their jobs will require new tools and new systems, but the basic skills of organizing information, knowledge of commonly used access points, and an ever-growing knowledge of information technology systems will remain the same. This presentation will stress the role of catalogers in bringing down data silos and in merging, augmenting, and creating interoperable data that can be used not just in library-specific systems but in various other systems. Catalogers' indispensable knowledge of controlled vocabularies, authority aggregators, metadata creation, metadata reuse, taxonomies, and data stores makes it all possible.
We will demonstrate how catalogers' knowledge can be leveraged to design an institutional repository and/or a researcher profiling system, create semantic-web-compliant data, create ontologies, utilize unique identifiers, and (re)use data from legacy systems.
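The kind of metadata repurposing described above can be sketched in a few lines: mapping MARC-style fields onto RDF predicates. This is a minimal sketch; the tag-to-predicate crosswalk and the example URIs below are illustrative assumptions, not a standard mapping.

```python
def marc_to_ntriples(record_uri, fields):
    """Map a dict of MARC-like tag -> value pairs to N-Triples lines."""
    # Illustrative tag-to-predicate crosswalk (an assumption, not a standard).
    mapping = {
        "100": "http://purl.org/dc/terms/creator",   # main entry - personal name
        "245": "http://purl.org/dc/terms/title",     # title statement
        "650": "http://purl.org/dc/terms/subject",   # topical subject
    }
    triples = []
    for tag, value in fields.items():
        predicate = mapping.get(tag)
        if predicate:
            triples.append(f'<{record_uri}> <{predicate}> "{value}" .')
    return triples

record = {"100": "Ilik, Violeta", "245": "Modeling Data with Karma"}
for line in marc_to_ntriples("http://example.org/record/1", record):
    print(line)
```

In practice a cataloger would work from real MARC records (e.g. via a MARC parsing library) and a vetted crosswalk, but the shape of the task is the same: structured legacy fields become triples that other systems can consume.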
It Takes a Village to Grow ORCIDs on Campus: Establishing and Integrating Uni... (Violeta Ilik)
This presentation describes the integration of ORCID identifiers into the open source Vireo electronic theses and dissertations (ETD) workflow, the university's digital repository, and the internally-used VIVO profile system.
Presented at Texas Conference on Digital Libraries (TCDL) 2014:
https://conferences.tdl.org/tcdl/index.php/TCDL/TCDL2014/schedConf/program
We describe current work in federating data from institutional research profiling systems, providing single-point access to substantial numbers of investigators through concept-driven search, visualization of the relationships among those investigators, and the ability to interlink systems into a single information ecosystem.
Crediting informatics and data folks in life science teams (Carole Goble)
Science Europe LEGS Committee: Career Pathways in Multidisciplinary Research: How to Assess the Contributions of Single Authors in Large Teams, 1-2 Dec 2015, Brussels
The People Behind Research Software: crediting from the informatics and technical point of view
NISO Webinar:
Experimenting with BIBFRAME: Reports from Early Adopters
About the Webinar
In May 2011, the Library of Congress officially launched a new modeling initiative, the Bibliographic Framework Initiative, as a linked data alternative to MARC. The Library then announced the proposed model, called BIBFRAME, in November 2012. Since then, the library world has been moving from mainly theorizing about the BIBFRAME model to practical experimentation and testing. This experimentation is iterative and continues to shape the model so that it becomes stable and broadly acceptable enough for adoption.
In this webinar, several institutions will share their progress in experimenting with BIBFRAME within their library system. They will discuss the existing, developing, and planned projects happening at their institutions. Challenges and opportunities in exploring and implementing BIBFRAME in their institutions will be discussed as well.
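As a concrete illustration of the model the speakers are experimenting with, here is a minimal, hand-written BIBFRAME description. The bf:Work and bf:Instance classes and the bf:instanceOf property come from the BIBFRAME vocabulary; the example URIs are invented, and bf:title is simplified to a literal (BIBFRAME 2.0 actually models titles as bf:Title resources).

```python
# Plain Turtle text; no RDF library is required to read it.
bibframe_example = """\
@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .
@prefix ex: <http://example.org/> .

ex:work1 a bf:Work ;
    bf:title "Moby-Dick" .

ex:instance1 a bf:Instance ;
    bf:instanceOf ex:work1 ;
    bf:title "Moby-Dick / Herman Melville. First edition." .
"""
print(bibframe_example)
```

The Work/Instance split (abstract creation vs. published carrier) is the core modeling move that distinguishes BIBFRAME from a flat MARC record.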
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
Experimental Mode: The National Library of Medicine and experiences with BIBFRAME
Nancy Fallgren, Metadata Specialist Librarian, National Library of Medicine, National Institutes of Health, US Department of Health and Human Services (DHHS)
Exploring BIBFRAME at a Small Academic Library
Jeremy Nelson, Metadata and Systems Librarian, Colorado College
Working with BIBFRAME for discovery and production: Linked data for Libraries/Linked Data for Production
Nancy Lorimer, Head, Metadata Dept, Stanford University Libraries
Scholars@Cornell: Visualizing the scholarly record (Muhammad Javed)
As stewards of the scholarly record, Cornell University Library is developing a data and visualization service known as Scholars@Cornell with the goal of improving the visibility of Cornell research and enabling discovery of explicit and latent patterns of scholarly collaboration. We provide aggregate views of data where dynamic visualizations become the entry points into a rich graph of knowledge that can be explored interactively to answer questions such as: Who are the experts in what areas? Which departments collaborate with each other? What are patterns of interdisciplinary research? And more. Key components of the system are Symplectic Elements to provide automated citation feeds from external sources such as Web of Science, the Scholars "Feed Machine" that performs automated data curation tasks, and the VIVO semantic linked data store. The new "VIZ-VIVO" component bridges the chasm between the back-end of semantically rich data and a front-end user experience that takes advantage of new developments in the world of dynamic web visualizations. We will demonstrate a set of D3 visualizations that leverage relationships between people (e.g., faculty), their affiliations (e.g., academic departments), and published research outputs (e.g., journal articles by subject area). We will discuss our results with two of the initial pilot partners at Cornell University, the School of Engineering and the Johnson School of Management.
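Questions like "who are the experts in what areas?" resolve against the VIVO linked data store, and a SPARQL query of roughly the following shape is a plausible sketch. vivo:hasResearchArea is a property from the public VIVO ontology, but the actual queries Scholars@Cornell runs are not shown in the abstract.

```python
# SPARQL text only; executing it would require a live VIVO endpoint.
expert_query = """\
PREFIX vivo: <http://vivoweb.org/ontology/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?person ?areaLabel
WHERE {
  ?person vivo:hasResearchArea ?area .
  ?area rdfs:label ?areaLabel .
}
ORDER BY ?areaLabel
"""
print(expert_query)
```

A visualization layer such as VIZ-VIVO would aggregate result rows like these (person, research area) into the D3 views the abstract describes.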
To facilitate data sharing from within the University of California system and beyond, the University of California Curation Center (UC3) is developing a new ingest and discovery layer for our data curation service, Dash. Dash uses the Merritt repository for preservation and a self-service overlay layer for submission and discovery of research datasets. The new overlay, dubbed Stash (STore And SHare), will feature an enhanced user interface with a simple and intuitive deposit workflow, while still accommodating rich metadata. Stash will enable individual scholars to upload data through local file browse or drag-and-drop operation; describe data in terms of scientifically meaningful metadata, including methods, references, and geospatial information; identify datasets for persistent citation and retrieval; preserve and share data in an appropriate repository; and discover, retrieve, and reuse data through faceted search and browse. Stash can be implemented in conjunction with any standards-compliant repository that supports the SWORD protocol for deposit and the OAI-PMH protocol for metadata harvesting. Stash will feature native support for the DataCite and Dublin Core metadata schemas, but is designed to accommodate other schemas to support discipline-specific applications. By alleviating many of the barriers that have historically precluded wider adoption of open data principles, Stash empowers individual scholars to assert active curation control over their research outputs; encourages more widespread data preservation, publication, sharing, and reuse; and promotes open scholarly inquiry and advancement.
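The OAI-PMH harvesting side that Stash depends on is a plain HTTP request; a minimal sketch of building one follows. The verb and parameter names come from the OAI-PMH 2.0 specification; the base URL is a placeholder, not a real Stash or Merritt endpoint.

```python
from urllib.parse import urlencode

def oai_list_records(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL (OAI-PMH 2.0 verb and parameters)."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return f"{base_url}?{urlencode(params)}"

# Placeholder endpoint - not a real repository URL.
print(oai_list_records("https://repository.example.org/oai"))
```

A real harvester would GET this URL, parse the XML response, and follow resumptionTokens to page through the full record set.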
This presentation by Shana McDanold of Georgetown University was presented during the NISO Virtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016
Wikidata: Verifiable, Linked Open Knowledge That Anyone Can Edit (Dario Taraborelli)
Slides for my September 23 talk on Wikidata and WikiCite – NIH Frontiers in Data Science lecture series.
Persistent URL: https://dx.doi.org/10.6084/m9.figshare.3850821
Presented for managers & researchers at The Global One Health Initiative of the Ohio State University, Africa Regional Branch in Addis Ababa, Ethiopia (April 24th 2019)
Evolutionary & Swarm Computing for the Semantic Web (Ankit Solanki)
The Semantic Web is poised to be the next big thing on the internet. This presentation discusses various approaches that can be used to query the underlying triple store that holds all the information.
EZID makes it simple for researchers and others to obtain and manage long-term identifiers for their digital content. The service can create and resolve identifiers, and it also allows entry and maintenance of information about the identifier (metadata). This presentation was given as part of a webinar series.
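The identifier metadata that EZID accepts is submitted over HTTP in ANVL form (one "key: value" pair per line). The sketch below serializes a metadata dict that way; the datacite.* field names follow EZID's documented DataCite profile, the values are illustrative, and real ANVL additionally requires percent-escaping of newlines and "%" inside values, which this sketch omits.

```python
def to_anvl(metadata):
    """Serialize a metadata dict as ANVL: one 'key: value' pair per line."""
    return "\n".join(f"{key}: {value}" for key, value in metadata.items())

body = to_anvl({
    "datacite.creator": "Ilik, Violeta",
    "datacite.title": "Modeling Data with Karma",
    "datacite.publicationyear": "2014",
})
print(body)
```

A client would send a body like this when minting or updating an identifier against the EZID API; the exact request flow (shoulder, authentication) is documented by the service and not reproduced here.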
Apache Stanbol and the Web of Data - ApacheCon 2011 (Nuxeo)
Presentation on Apache Stanbol (incubating) and related projects given by Olivier Grisel during ApacheCon 2011.
More information:
- http://incubator.apache.org/stanbol/
- http://www.iks-project.eu
Delivered by Peter Burnhill, Director of EDINA, at the PRELIDA Consolidation and Dissemination workshop on 17/18 October 2014 (http://prelida.eu/consolidation-workshop).
Summary: The web changes over time, and significant reference rot inevitably occurs. Web archiving delivers only a 50% chance of success. So in addition to the original URI, the link should be augmented with temporal context to increase robustness.
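The augmentation the talk recommends can be sketched as pairing a URI with the datetime at which it was referenced, in the spirit of the Robust Links convention (versiondate/versionurl attributes). The snapshot URL pattern below follows the public web.archive.org convention; a snapshot near that date is an assumption, not a guarantee.

```python
from datetime import datetime, timezone

def robust_link(uri, referenced_at):
    """Pair a URI with the datetime it was cited, plus a candidate snapshot URL."""
    stamp = referenced_at.strftime("%Y%m%d%H%M%S")
    return {
        "href": uri,
        "versiondate": referenced_at.isoformat(),
        "versionurl": f"https://web.archive.org/web/{stamp}/{uri}",
    }

link = robust_link("http://prelida.eu/", datetime(2014, 10, 17, tzinfo=timezone.utc))
print(link["versionurl"])
```

Even when the archived copy is missing, the recorded versiondate lets a resolver search nearby snapshots, which is exactly the added robustness the talk argues for.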
A talk exploring the Semantic Web, particularly Linked Data, and the Rhizomer approach. Presented August 14, 2012 at the SRI AIC Seminar Series, Menlo Park, CA.
Modeling Data with Karma – Data Integration Tool
prepared for the VIVO Apps & Tools Workshop
Violeta Ilik
Semantic Technologies Librarian
Texas A&M University
Austin, TX - August 6, 2014