Niall Beard's presentation about the BiodiversityCatalogue and how it facilitates web service discoverability, its interaction with Taverna, and its interoperability with the bio.tools registry.
BioSHaRE: Making data useful without direct sharing: Cafe Variome and Omics b... - Lisette Giepmans
BioSHaRE conference July 28th, 2015, Milan - Latest tools and services for data sharing
Stream 1: Tools for data sharing analysis and enhancement
Café Variome is a highly flexible data discovery platform suitable for use with genomic data and/or phenotype data in settings such as diagnostic networks, disease consortia, biobanks and research communities. It enables users to search for the existence rather than the substance of datasets, and as part of this offers a complete suite of data discovery capabilities, focused on the data rather than metadata. Following data discovery, the system also facilitates controlled data sharing.
‘Café Variome Central’ aims to consolidate all publicly available genetic variants into one discovery portal through which to announce, discover and acquire a comprehensive listing of observed neutral and disease-causing gene variants. It employs publicly available web services to gather and make searchable a set of pointers to records of interest, to help users discover the existence of variant data and direct them to the original data sources where the data may be examined in full.
The software is in production as version 1.0 software, available presently for collaborative applications: http://www.cafevariome.org/
Café Variome can be installed stand-alone, or federated to allow searching across instances while the data remains at the source
OmicsConnect, underpinned by an ‘extended DAS’ (eDAS) protocol for data transfer, enables data to be fed into a genome browser from diverse sources while controlling which users have access to which data sources, and to which data slices within those datasets.
DAS is an Extensible Markup Language (XML) communication protocol that allows a single client (e.g. a genome browser) to integrate information from multiple DAS servers dispersed around the world and present a unified view of the data. The eDAS system brings several new advantages: the data are controlled by the content providers and can be modified, restricted and updated as required, and the data are shared in a way that makes it easy for the end user to get information about specific regions, genes or markers without having to download and process entire datasets.
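As a rough illustration of the kind of request a DAS client makes, the sketch below queries a hypothetical DAS features endpoint for a genomic segment and lists the returned annotations. The server URL and source name are placeholders, and the eDAS access-control extensions described above are not shown.

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical DAS server and data source; real deployments expose
# /das/<source>/features endpoints as defined by the DAS 1.53 specification.
DAS_SERVER = "https://das.example.org/das"
SOURCE = "hg19_variants"

def fetch_features(chrom, start, end):
    """Request features for a genomic segment and return (type, start, end, label) tuples."""
    url = f"{DAS_SERVER}/{SOURCE}/features"
    resp = requests.get(url, params={"segment": f"{chrom}:{start},{end}"}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)  # DASGFF XML document
    features = []
    for feat in root.iter("FEATURE"):
        features.append((
            feat.findtext("TYPE", default=""),
            feat.findtext("START", default=""),
            feat.findtext("END", default=""),
            feat.get("label", ""),
        ))
    return features

if __name__ == "__main__":
    for f in fetch_features("1", 155000000, 155100000):
        print(f)
```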
The latest version of OmicsConnect is available for use under standard terms of academic collaboration: http://omicsconnect.org
The tool is currently being improved for better adaptability and faster performance (fall 2015).
Contact info:
Prof. Anthony Brookes
University of Leicester
ajb97@leicester.ac.uk
Key words: genomics, genotype-phenotype, matchmaking, query-by-method API, rare disease, software
This document discusses challenges around scholarly data, including fragmented and poorly described data. It emphasizes the importance of experimental details, data availability, and data publication for reproducibility. Springer Nature's Scientific Data is highlighted as a new open-access journal for detailed data descriptors. The Scientific Data ISA-explorer is presented as a web application for discovering, exploring and visualizing data descriptors.
Increased access to the data generated is fuelling increased consumption and accelerating the cycle of discovery. But the successful integration and re-use of heterogeneous data from multiple providers and scientific domains is a major challenge within academia and industry, often due to incomplete description of the study details or metadata about the study. Using the BioSharing, ISA Commons and the STATistics Ontology (STATO) projects as exemplar community efforts, in this breakout session we will discuss the evolving portfolio of community-based standards and methods for structuring and curating datasets, from experimental descriptions to the results of analysis.
http://www.methodsinecologyandevolution.org/view/0/events.html#Data_workshop
Wrangling REDCap: An Introduction and Inspiration - Jacqueline Stern
REDCap is a secure web application for building and managing online surveys and databases. It allows users to rapidly develop projects using either an online designer or by uploading a data dictionary template from Excel. Projects can include both surveys and databases. REDCap provides tools like branching logic, file uploading, scheduling, and exporting data to statistical software. The presentation provides examples of how REDCap has been used at Vanderbilt for projects involving training program tracking, appointment scheduling, and participant data collection. Users are encouraged to consider how REDCap could help with tasks requiring regular information gathering or projects with multiple steps and users.
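For readers wondering what the data export side looks like in practice, here is a minimal sketch that pulls project records through the standard REDCap API. The instance URL and token are placeholders, and the fields returned depend entirely on your project's data dictionary.

```python
import requests

# Placeholder URL and token: each REDCap installation exposes its own /api/
# endpoint, and per-project tokens are issued from the project's API page.
API_URL = "https://redcap.example.edu/api/"
API_TOKEN = "YOUR_PROJECT_TOKEN"

def export_records():
    """Export all records of a REDCap project as a list of dicts."""
    payload = {
        "token": API_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",  # one row per record
    }
    resp = requests.post(API_URL, data=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    records = export_records()
    print(f"Exported {len(records)} records")
```

From here the records can be loaded straight into pandas or handed to statistical software, which is typically the point of automating the pull.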
Presentation given by Dr Xin-Yi Chua at the 'Sharing Health-y Data Workshop: Challenges and Solutions' event co-hosted by ANDS and HISA. Held on Wednesday 16th March 2016 at the Translational Research Institute, Brisbane, Australia.
How Accessible Is Our Collection? Performing an E-Resources Accessibility Review - NASIG
Michael Fernandez, presenter
While the growth and adoption of electronic resources has been exponential, there has been a concurrent lag in ensuring that e-resources are accessible by users with disabilities. Vendors have become increasingly aware of this issue and are taking steps to address it; however, given the sheer size of the library marketplace, there is a noticeable lack of consistency across vendor platforms. In the Summer of 2016, American University Library began evaluating the accessibility of its web content as part of a university-wide initiative focusing on Section 508 compliance. This review entailed not only library hosted websites, but also third party platforms for databases, e-journals, and e-books. In order to assess the accessibility of the library’s subscribed e-resources, the Electronic Resources Management Unit created an accessibility inventory. All subscribed e-resources were evaluated to gauge the efforts being made by vendors to make their products accessible. The methodology for this inventory involved seeking out voluntary product accessibility templates (VPATs), identifying clearly marked accessibility statements on the vendor site or platform, and reviewing current license agreements for verbiage that ensures a commitment to accessibility regulations and allows for remediation of accessibility issues that may be identified. This inventory represented an initial but crucial step towards e-resource accessibility. AU Library was able to identify the vendors who have already taken measures, and for those who had not, we identified the opportunity to create a dialogue. In this presentation, I’ll detail methods and resources that can be used in order to assess the status of a collection’s accessibility. Additionally, I’ll describe how AU Library was able to collaborate on this shared goal by identifying allies across the university in the offices of assistive technology and procurement. Finally, I’ll discuss our strategies for further educating and engaging with vendors.
From the ORCID Outreach Meeting, May 21-22, 2014, held in Chicago, Illinois, USA. https://orcid.org/content/orcid-outreach-meeting-and-codefest-may-2014
Best practices in the creation of ORCID identifiers for faculty, staff, and students: technical integration
Research organizations are creating ORCID iDs and integrating them into a variety of systems, from personnel databases, to directories, repositories, and university presses. In this session, organizations will share information and strategies on technical aspects of working with ORCID APIs, strategies for modifying internal systems to capture and store ORCID iDs, and interactions with other identifiers.
Moderator: Simeon Warner, Research Associate, Cornell University
Presenters:
Urban Andersson, IT Librarian, Chalmers University of Technology
Peter Flynn, Lead Developer, Boston University
James Creel, Senior Lead Software Applications Developer, Texas A&M University
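As a minimal sketch of the kind of API call these integrations build on, the snippet below reads a public record from the ORCID public API. The iD used is the sample record from ORCID's documentation, error handling is kept deliberately thin, and member-API writes require OAuth credentials not shown here.

```python
import requests

def fetch_orcid_record(orcid_id):
    """Retrieve a researcher's public ORCID record as JSON from the public API."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/record"
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # 0000-0002-1825-0097 is the sample iD used in ORCID's documentation.
    record = fetch_orcid_record("0000-0002-1825-0097")
    name = record["person"]["name"]
    print(name["given-names"]["value"], name["family-name"]["value"])
```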
REDCap is an electronic data capture system that gives users control over their data and is more powerful, flexible and secure than other options. It securely stores data in encrypted databases and has various access controls, passwords, and logging features to track user activity. REDCap automatically logs all user actions to allow administrators to review the activity and data accessed by any given user. While it has strong security, some audit departments may see disadvantages compared to paper-based systems. The document provides instructions on registering and logging into REDCap for a clinical trial launch event.
This document summarizes efforts to publish clinical quality data from health.data.gov as linked open data. It describes releasing metadata and data from the Hospital Compare project as RDF using vocabularies like VoID, FOAF and DC. Tools like Google Refine, Top Braid Composer and Virtuoso were used to transform, model and serve the data. A community of practice seeks to evolve standards and share best practices for publishing government linked data.
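As a loose sketch of the modelling step, the snippet below uses rdflib to build a small VoID description of a dataset and serialize it as Turtle. The dataset and endpoint URIs are invented placeholders, not the actual health.data.gov identifiers.

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DCTERMS, FOAF

VOID = Namespace("http://rdfs.org/ns/void#")

g = Graph()
g.bind("void", VOID)
g.bind("dcterms", DCTERMS)
g.bind("foaf", FOAF)

# Hypothetical dataset URI, loosely modelled on the Hospital Compare release.
ds = URIRef("http://example.gov/dataset/hospital-compare")
g.add((ds, RDF.type, VOID.Dataset))
g.add((ds, DCTERMS.title, Literal("Hospital Compare quality measures")))
g.add((ds, DCTERMS.publisher, URIRef("http://example.gov/agency/hhs")))
g.add((ds, FOAF.homepage, URIRef("http://health.data.gov/")))
g.add((ds, VOID.sparqlEndpoint, URIRef("http://example.gov/sparql")))

print(g.serialize(format="turtle"))
```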
This document discusses REDCap, a web-based application for researchers and clinicians to create and manage databases and surveys. It describes REDCap features such as different data field types, data validation, user access controls, data export capabilities, and support for multi-center and longitudinal research. The document also provides statistics on REDCap usage at Kasr Al Ainy, including 46 projects, 114 users across several departments, and offers a live demo of the REDCap system and its features.
This presentation was provided by Tina Feick of HARRASSOWITZ Booksellers & Subscription Agents, North America, and The I2 Working Group, during the NISO Update, held at ALA Annual on June 29th, 2008.
Accessibility Compliance: One State, Two Approaches - NASIG
Accessibility compliance is a growing concern for academic institutions as it pertains to instructional materials on websites, course management systems, and in course documents. This extends to materials provided by academic libraries such as electronic resources. This presentation will discuss the approaches that both systems governing Tennessee public colleges and universities are using to ensure that vendors are compliant with standards as described in WCAG 2.0, EPUB 3, and Section 508 of the Rehabilitation Act of 1973.
The session will be divided into three parts as follows:
Introduction to the difference between accessibility and accommodation. Discussion of the types of disabilities of which librarians should be aware when acquiring and assessing different electronic resources. Brief mention of the laws and standards related to accessibility compliance.
An overview of the University of Tennessee System’s approach to encouraging accessibility compliance by incorporating detailed conformance language into licenses with the vendors and publishers of electronic and information technology.
A discussion of the Tennessee Board of Regents system’s approach to encouraging accessibility compliance by conducting an accessibility audit of resources held in common among the system’s libraries and through a collaborative process of compliance document collection from vendors/publishers and sharing in an AIMT (Accessible Instructional Materials and Technology) database. An introduction to the different types of documents and their content: Accessibility Statement, Voluntary Product Accessibility Template (VPAT), WCAG 2.0 (Web Content Accessibility Guidelines) Checklist, EPUB 3 Accessibility Checklist, and a Conformance and Remediation Form.
Stephanie J. Adams
Electronic Resources Librarian, Tennessee Tech University
Ms. Adams is the Electronic Resources Librarian at Tennessee Tech University where she is responsible for the acquisition and set-up of all electronic resources at the Volpe Library.
Corey S. Halaychik
The University of Tennessee, Knoxville
Licensing guy, negotiator of master agreements at the University of Tennessee Libraries, and co-chair of The Collective, I work to make libraries more efficient, saving time and money for institutions and the people they serve.
Jennifer Mezick
Pellissippi State Community College
Acquisitions and Collection Development Librarian at Pellissippi State Community College in Knoxville, TN. In addition to these roles, I manage the libraries' electronic resources and website, and provide instruction and research support to students and faculty.
Web Services and Workflows: Taverna, BioCatalogue and myExperiment - Rafael C. Jimenez
This document summarizes web services and workflow solutions from myGrid, including Biocatalogue, Taverna, and myExperiment. Biocatalogue is a registry of life science web services that allows users to register, discover, and curate services. Taverna is a workflow management system that allows users to assemble workflows using available services from Biocatalogue and myExperiment. MyExperiment is a site for sharing, discovering, and reusing workflows. It supports reuse and repurposing of workflows across different scientific domains.
Persist Ecore models to RDF, or use Active Objects and Reflection to hold active object state in RDF. These slides were presented as a lightning talk at Code Generation 2013.
Automatic Metadata Generation - Charles Duncan, JISC CETIS
Slides by Charles Duncan summarising the findings of the automatic metadata generation use cases project, see http://www.intrallect.com/wiki/index.php/AMG-UC
The adoption of ORCID identifiers by funding organizations - ORCID, Inc.
The document discusses options for comprehensively tracking NIH-funded researchers. It proposes:
1) Extending the requirement for NIH eRA Commons accounts to all students and postdocs on NIH projects to capture data in annual reports.
2) Automating NIH Research Training Tables to prepopulate with existing data and share with program directors.
3) Enhancing the SciENcv online biosketch tool to auto-populate biosketches, link researchers to grants and publications using ORCID IDs, and eventually replace uploaded PDF biosketches.
4) Requiring ORCID IDs to help identify individual researchers and their contributions across different systems and agencies.
This presentation was provided by Tina Feick of HARRASSOWITZ Booksellers & Subscription Agents, North America, and The I2 Working Group, during the NISO/BISG Forum: The Changing Standards Landscape: Creative Solutions to Your Information Problems, held at ALA Annual on June 27th, 2008.
Presentation on how Crossref's REST API can be used to get the full text of publisher content for the purpose of TDM. From Crossref LIVE in Brazil, Dec 2016.
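A hedged sketch of that approach: ask the Crossref REST API for works that advertise full-text links and keep only the links flagged for text mining. The query string and contact address below are placeholders.

```python
import requests

def tdm_full_text_links(query, rows=5, mailto="you@example.org"):
    """Print DOIs and full-text links that publishers have flagged for text mining."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={
            "query": query,
            "filter": "has-full-text:true",
            "rows": rows,
            "mailto": mailto,  # identifies you for Crossref's "polite" pool
        },
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        for link in item.get("link", []):
            if link.get("intended-application") == "text-mining":
                print(item["DOI"], link["content-type"], link["URL"])

if __name__ == "__main__":
    tdm_full_text_links("genome annotation")
```

Actually retrieving the full text then depends on the publisher's own access rules; the API only supplies the pointers.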
New Initiatives - Geoffrey Bilder - London LIVE 2017 - Crossref
Presentation by Geoffrey Bilder at Crossref London LIVE, 26th September 2017. New initiatives at Crossref including organisational and grant identifiers.
This document summarizes Bethany Greene's investigation of perpetual access provisions for e-resources at UNC-Chapel Hill as a graduate student. She compiled data from license agreements and ran title lists against the Keepers Registry to determine what percentage of e-resources have perpetual access and what form it takes. Her results found that 15% of titles had no third-party archiving, 9% had access on local hard drives or media, and 26% were thought to be archived but actually were not.
We describe current work in federating data from institutional research profiling systems – providing single-point access to substantial numbers of investigators through concept-driven search, visualization of the relationships among those investigators, and the ability to interlink systems into a single information ecosystem.
This document outlines an agenda for a webinar on how universities and funding organizations are using ORCID (Open Researcher and Contributor ID) to identify early career researchers. The webinar covers an introduction to ORCID, how the National Institutes of Health uses ORCID identifiers, challenges and benefits of ORCID for early career researchers, and encouraging graduate students and postdocs to adopt ORCID iDs. Presenters are from ORCID, NIH, Harvard University and Texas A&M University.
Measure Twice and Cut Once: How a Budget Cut Impacted Subscription Renewals f... - NASIG
Speakers: Ilda Cardenas, Keri Prelitz, Greg Yorba
The process of looking at subscriptions with the goal of proactively downsizing revealed that the library’s existing renewal workflows were outdated and in need of regular analysis to identify underused resources. Additionally, this project uncovered shortcomings of analysis that is reliant on usage data, the unexpected ramifications of large-scale subscription cancellations, as well as the need for improved communication within and between the many library departments affected by subscription cancellations.
The document tells the story of an elderly lady named Dona Otília who bought sardines at the market. When she returned home to cook the sardines, her cat Micas stole them while she was answering the telephone. When Dona Otília found out, she was furious with Micas, who ran away to escape her anger.
06 Days Mystical Kerala Backwaters Tour - Travel Astu
This 6-day tour of Kerala, India includes visits to historic sites in Cochin, wildlife viewing at Periyar Lake in Thekkady, spice plantations, tea estates, and an overnight cruise through the scenic backwaters of Kerala on a houseboat. The itinerary includes sightseeing in Cochin, a drive through tea gardens to Thekkady, nature treks and elephant rides in Thekkady, visits to cardamom and pepper plantations, an overnight cruise on a houseboat through the backwaters, and a final drive back to Cochin.
This document reflects on an experience, with sections discussing the author's thoughts and feelings of being excited and nervous, challenges of cold weather and grades, joys of making friends and hard work paying off, and a lesson learned about time management. The document ends by telling the reader the presentation is over.
Roni Storjohann is seeking a position with a Bachelor's degree in Management Information Systems from the University of Northern Iowa. She has a 3.98 GPA and was on the Dean's list. Previously, she attended Scott Community College and graduated magna cum laude with an Associate's degree. Her work experience includes customer service roles at Hy-Vee and as a swimming instructor. She also coached youth soccer and served as captain of the women's soccer team at Scott Community College where she received academic and athletic honors.
This rubric outlines criteria for assessing student portfolios. It evaluates portfolios based on organization, documentation of the learning process, demonstration of skills and growth, and reflection. Portfolios are scored on a scale of 1 to 4 in each category, with 4 being the highest score. The assessment focuses on helping students learn and improve.
Speech refers to the physical ability to produce sounds, language is a system of symbols and rules for communication, and communication is the exchange of thoughts and information between individuals. This document outlines the learning outcome of differentiating these three concepts from a psychological perspective for a psycholinguistics class. It provides the name of the student, professor, semester, location and timeframe for the class.
The aim is to reinforce the capacity for abstraction, working gradually, starting with concrete elements (drawings) and moving toward no longer depending on them in order to carry out the basic mathematical operation of subtraction.
The initial inlay design used candid photos taken during the music video filming without the band's knowledge, which were printed and ripped up to resemble indie CD designs previously seen as added extras for buyers. To better link the inlay to the final cover, the background colour needed changing. The partner designed the inlay sleeves while both provided feedback and suggestions to improve each other's work, with the document's author having input on the border colour, which was later changed based on target audience feedback.
The document summarizes facts about the Earth, beginning with an explanation that the Earth is a planet that orbits the Sun in 365 days. It also explains the Earth's rotation and revolution and their influence on the changing of the seasons. It then discusses the Earth's layers, from the core and crust to the water and the atmosphere.
500 Kobe Pre-Accelerator Demo Day >> Shizencyokuhan - 500 Startups
This document outlines information about a natural cultivation farming organization in Japan called 500 Kobe Batch, led by CEO Hideaki Takahashi. It describes the organization's mission of connecting natural cultivation farmers to sell their pesticide-free and fertilizer-free products locally. The organization has grown to over $10,000 in monthly sales through an online ordering and communication platform, serving more than 90 registered farmers across 18 prefectures in the six months since launching in February 2016.
- Web scale discovery services provide a single search box to search across a library's subscribed resources including journals, books, databases, and more. They index these resources upfront to provide fast search results compared to federated search which searches resources individually.
- Key parameters for evaluating discovery services include coverage, relevance ranking methodology, metadata quality, search refinement options, value-added features, and customer support. Subject indexing can be improved through "platform blending" which leverages subject indexes from databases.
- User studies have shown discovery services can improve search effectiveness for users compared to individual library databases or Google Scholar. Local support from the discovery service provider is important.
The document provides an overview of the National Center for Biomedical Ontology (NCBO) technology including REST web services, the BioPortal ontology repository, NCBO web services, and the BioPortal SPARQL endpoint. Key NCBO web services allow users to search ontologies, access ontology terms and hierarchies, propose term annotations, map between ontologies, and annotate data with ontology terms. The document outlines several NCBO tools and resources available for working with biomedical ontologies.
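As a small illustration of one of these services, the sketch below calls BioPortal's search endpoint to look up ontology classes matching a term. You need your own (free) API key, and only a couple of commonly used response fields are shown.

```python
import requests

BIOPORTAL_API = "https://data.bioontology.org"
API_KEY = "YOUR_BIOPORTAL_API_KEY"  # issued with a free BioPortal account

def search_terms(query):
    """Search across BioPortal ontologies and print matching class labels and IRIs."""
    resp = requests.get(
        f"{BIOPORTAL_API}/search",
        params={"q": query, "apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    for cls in resp.json().get("collection", []):
        print(cls.get("prefLabel"), "->", cls.get("@id"))

if __name__ == "__main__":
    search_terms("melanoma")
```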
Cape Town - Bioschemas workshop before the Bioinformatics Education Summit.
Explains schema.org, Bioschemas, TeSS Case study, and the tools and implementation techniques adopters can use
Crossref for Ambassadors - Introductory webinar - Crossref
This document provides an overview and agenda for a Crossref presentation. It introduces Crossref, its mission to make research outputs easy to find and link, and its role in tagging metadata and building infrastructure for scholarly communications. The presentation agenda covers Crossref services, members, content types, focus for 2018, and new developments like Event Data and tools to help members. It also provides links for brand guidelines, communications contacts, and product support.
This document provides an overview and agenda for a Crossref presentation. It introduces Crossref, discusses its history and mission to make research outputs easy to find and link. It outlines Crossref's focus for 2018 on strengthening community links and improving metadata. The presentation describes Crossref's services including reference linking, funding data, Crossmark, and similarity check. It also discusses new developments like event data and collaboration with OJS. Contact information and links are provided for further information.
BioCatalogue talk by Carole Goble. She outlines in these slides the reasons behind the BioCatalogue project. And present the BioCatalogue and its goals.
This document discusses linking services and data on the web. It notes that while semantic web service ontologies were proposed, they failed to gain adoption. Web APIs have become more widely used as they are public, reusable, and have business models. However, their semantics and data formats are often unclear. The document proposes "linked services" - services described as linked data to provide reusable functionality for linked data applications. It presents tools and infrastructure to support finding, composing, and invoking linked services. Linked services could help make traditional services more accessible and applicable by expressing them using existing web vocabularies.
The document discusses best practices for designing web services. It covers using HTTP as a protocol, different service types like SOAP, XML-RPC and REST, and considerations for designing APIs like making them stateless, versioning, error handling and authentication. The document emphasizes keeping services small, consistent and well documented with examples to empower users.
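A small sketch of a few of those practices (a version segment in the URL, stateless request handling, and a consistent JSON error envelope), using Flask; the resource names are invented for illustration.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory example data; a real service stays stateless with respect to client
# sessions and keeps its state in a backing store instead.
SAMPLES = {"s1": {"id": "s1", "organism": "Homo sapiens"}}

@app.errorhandler(404)
def not_found(err):
    # One consistent, machine-readable error envelope for every failure mode.
    return jsonify({"error": {"code": 404, "message": "resource not found"}}), 404

@app.get("/v1/samples/<sample_id>")  # version in the path so clients can pin behaviour
def get_sample(sample_id):
    sample = SAMPLES.get(sample_id)
    if sample is None:
        return jsonify({"error": {"code": 404, "message": f"no sample {sample_id}"}}), 404
    return jsonify(sample)

if __name__ == "__main__":
    app.run(debug=True)
```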
Architect’s Open-Source Guide for a Data Mesh Architecture - Databricks
Data Mesh is an innovative concept addressing many data challenges from an architectural, cultural, and organizational perspective. But is the world ready to implement Data Mesh?
In this session, we will review the importance of core Data Mesh principles, what they can offer, and when it is a good idea to try a Data Mesh architecture. We will discuss common challenges with implementation of Data Mesh systems and focus on the role of open-source projects for it. Projects like Apache Spark can play a key part in standardized infrastructure platform implementation of Data Mesh. We will examine the landscape of useful data engineering open-source projects to utilize in several areas of a Data Mesh system in practice, along with an architectural example. We will touch on what work (culture, tools, mindset) needs to be done to ensure Data Mesh is more accessible for engineers in the industry.
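As a loose sketch of where Spark fits, the snippet below shows a domain team curating raw events into a published, partitioned "data product". The paths, column names, and storage layout are assumptions for illustration only, not a prescribed Data Mesh implementation.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-data-product").getOrCreate()

# Domain-owned raw input; in a data mesh the orders team owns this source,
# its documentation, and the product derived from it.
raw = spark.read.json("s3://orders-domain/raw/orders/")

orders_product = (
    raw.filter(F.col("status").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
       .select("order_id", "customer_id", "order_date", "total_amount", "status")
)

# Publish as a versioned, discoverable dataset with an explicit schema contract.
(orders_product.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://orders-domain/products/orders_v1/"))
```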
The audience will leave with a good understanding of the benefits of Data Mesh architecture, common challenges, and the role of Apache Spark and other open-source projects for its implementation in real systems.
This session is targeted for architects, decision-makers, data-engineers, and system designers.
The arrival and enormous growth of digital content have fundamentally changed the way in which content is made available to library users. In recent years, libraries have been acquiring more and more electronic resources (e-resources) because of perceived benefits such as easy access to information and its comprehensiveness. Due to the influx of e-resources in libraries, the collection, acquisition, and maintenance of these resources have become complicated issues to deal with. This has forced libraries to devise strategies to manage and deliver e-resources conveniently. Therefore, “Management of E-resources”, or “Electronic Resource Management” (ERM), has become a challenge for library professionals that needs to be addressed through research and practice. To meet these challenges, library professionals and content providers have decided to develop Electronic Resource Management Systems (ERMS) for managing e-resources in a more systematic way.
Webinar held 6 October 2020.
The webinar is relevant for new and existing Crossref members, publishers, editors, researchers, service providers, hosting platforms, funders, librarians; really anyone interested in finding out a bit more about what Crossref is and does.
This webinar covers:
• How to register content with Crossref
• How to make updates to your metadata in order to make changes, corrections, or to add more detail
• Participation reports
• Additional services and where to find help.
Sessions presented in English by Crossref staff.
There is a growing trend towards a consolidation of services for Electronic Resource Management (ERM), A-Z journal listings, full text link resolving and discovery services under a single service provider. In many cases, the adoption of a discovery service from a provider that is not the same as the libraries' existing link resolver service means managing multiple knowledgebases. In this session, 3 libraries will provide an overview of their experience and strategies for maintaining separate link resolving and discovery services in lieu of adopting a full suite of services from a single service provider. Each speaker will provide a case study on the advantages and/or challenges of managing their chosen discovery service, EBSCO's EDS, Ex Libris' Primo and ProQuest's Summon, in conjunction with the CUFTS/GODOT open source knowledgebase/link resolver.
Presenters:
Leanna Jantzi, Electronic Resources Copyright Librarian, Capilano University
Jennifer Richard, Academic Librarian, Acadia University
Sandra Wong, Electronic Resources Librarian, Simon Fraser University
ChemSpider – disseminating data and enabling an abundance of chemistry platforms - Ken Karapetyan
ChemSpider is one of the chemistry community’s primary public compound databases. Containing tens of millions of chemical compounds and their associated data, ChemSpider now serves data to many tens of websites and software applications. This presentation will provide an overview of the expanding reach of the ChemSpider platform and the nature of the solutions it helps to enable. We will also discuss some of the envisaged future directions for the project and how we intend to continue expanding the platform's impact.
Towards Semantic APIs for Research Data Services (Invited Talk) - Anna Fensel
Rapid development of Internet and Web technology is changing the state of the art in the communication of knowledge and of the results of research activities. In particular, semantic technology and linked and open data are becoming key enablers for successful and efficient progress in research. First, I define the research data service (RDS) and discuss typical current and possible future usage scenarios involving RDS. I then discuss the state of the art in the areas of semantic service and data annotation and API construction, as well as infrastructural solutions applicable to RDS realisation. Finally, innovative methods of online dissemination, promotion and efficient communication of research are discussed.
The document discusses how to automatically apply a taxonomy to content using Machine Aided Indexing (MAI). It covers the expected organizational changes, how to get people to embrace the taxonomy, and where MAI is currently being used. It then describes how MAI works, including the modules, syntax, process, and how its accuracy is evaluated using statistics. Rules for MAI include simple, complex, proximity, location, and format conditions.
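To make the rule types concrete, here is a toy sketch of simple and proximity conditions applied to free text. Real MAI rule bases are far richer (location, format and frequency conditions, thesaurus links, weighting), and this is not the product's actual rule syntax.

```python
import re

# Two illustrative rules: one fires on any matching term, one on two words
# appearing within a small window of each other.
RULES = [
    {"if_any": ["taxonomy", "thesaurus"], "assign": "Controlled vocabularies"},
    {"near": ("machine", "indexing"), "window": 5, "assign": "Machine aided indexing"},
]

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def apply_rules(text):
    toks = tokens(text)
    assigned = set()
    for rule in RULES:
        if "if_any" in rule and any(t in toks for t in rule["if_any"]):
            assigned.add(rule["assign"])
        if "near" in rule:
            a, b = rule["near"]
            pos_a = [i for i, t in enumerate(toks) if t == a]
            pos_b = [i for i, t in enumerate(toks) if t == b]
            if any(abs(i - j) <= rule["window"] for i in pos_a for j in pos_b):
                assigned.add(rule["assign"])
    return sorted(assigned)

print(apply_rules("Applying a taxonomy with machine aided indexing rules."))
```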
Ordering the chaos: Creating websites with imperfect data - Andy Stretton
The document discusses strategies for dealing with messy and imperfect data when creating websites. It describes how the Chembio Hub uses techniques like automatically tagging untagged data using significant terms analysis in Elasticsearch and creating database views to normalize different schemas. Filling gaps in tagging by querying search engines and considering flat file databases are also proposed. The goal is to enable sharing of chemical and biological research data across Oxford departments in a sustainable way without requiring perfect data formats or extensive curation.
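A hedged sketch of the significant-terms idea described above, posted to a hypothetical Elasticsearch index over HTTP: terms that are unusually frequent among records matching a query (the foreground set), relative to the whole index, are suggested as tags. The index name, field names, and endpoint are assumptions, and the tags field would need to be a keyword type.

```python
import json
import requests

ES = "http://localhost:9200"   # placeholder Elasticsearch instance
INDEX = "chembio_records"      # hypothetical index of partially tagged records

def suggest_tags(query_text):
    """Return tags that are disproportionately common in records matching query_text."""
    body = {
        "size": 0,
        "query": {"match": {"description": query_text}},  # foreground set
        "aggs": {
            "suggested_tags": {
                "significant_terms": {"field": "tags", "size": 10}
            }
        },
    }
    resp = requests.post(
        f"{ES}/{INDEX}/_search",
        data=json.dumps(body),
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    buckets = resp.json()["aggregations"]["suggested_tags"]["buckets"]
    return [b["key"] for b in buckets]

print(suggest_tags("kinase inhibitor screening"))
```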
Crossref Community Manager, Vanessa Fairhurst, talks about the range of services Crossref offers to support and enhance scholarly research communications. Services include Reference Linking, Cited-By, the Funder Registry, Event Data, Crossmark and Similarity Check. Information on how to get further information and support is also available at the end of the presentation.
NHSPUG June 2015 - Must Love Term Sets: The New and Improved Managed Metadat... - Jonathan Ralton
The document is a presentation on managed metadata in SharePoint 2013. It discusses the new managed metadata service, term store, and content type syndication features. The presentation provides an overview of these new features, including improvements to the user interface for managing terms, support for multi-lingual terms, managed navigation, hashtags, and the taxonomy API. It also discusses considerations for using term sets, columns, and content type publishing across sites.
The ELIXIR implementation study TeSS yielded a JavaScript application called Concept Maps. The idea is to abstract the typical steps taken in a data analysis workflow into EDAM Operation and Data nodes, and connect these abstract steps with narrative text, available tools, and training resources.
Bioschemas Adoption Meeting: Training Materials and Events - Niall Beard
Presentation about the Training Materials and Events specifications at the Bioschemas Adoption Meeting on 2nd October 2017, given by Niall Beard. It shows the aggregation use case of Bioschemas - adoption of schema.org markup and a new tool for importing schema.org annotations.
This presentation describes TeSS, the motivations behind it, and some of its many features, and looks at some of the ongoing and future developments and activities we're working towards. Additionally, there are some slides on Bioschemas at the end.
schema.org - Simple Structured Data for the Web - Niall Beard
Bioschemas lightning talk for Collaborations Workshop 2017 in Leeds. Find out how to take advantage of lightweight schema.org markup to enhance the discoverability of your web data through search engines and by aggregators.
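As a concrete example of that lightweight markup, the snippet below generates a schema.org Event description as JSON-LD, ready to embed in a page. The property set follows schema.org's Event type (Bioschemas layers its recommendations on top of this), and all of the values are made up.

```python
import json

event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Introduction to Genome Assembly",
    "startDate": "2017-05-15T09:00",
    "endDate": "2017-05-15T17:00",
    "location": {
        "@type": "Place",
        "name": "University of Manchester",
        "address": "Oxford Road, Manchester, UK",
    },
    "description": "A one-day hands-on training course.",
    "url": "https://example.org/courses/genome-assembly",
}

# Embed this block anywhere in the page; search engines and aggregators read it
# without it affecting what visitors see.
print('<script type="application/ld+json">')
print(json.dumps(event, indent=2))
print("</script>")
```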
This presentation shows you what you can do with TeSS to find training resources, both events and materials. This could be for someone looking for training resources to learn new skills, or for trainers looking for materials to build on when delivering their own courses.
It then talks about how you can integrate TeSS content into your own websites using several plugins, APIs, and links.
And finally, how you get your own materials and events registered in TeSS.
This presentation was recorded as part of the ELIXIR Webinar series and can be viewed at http://www.elixir-europe.org/documents/elixir-webinar%3A-introducing-tess-february-2017 - The presentation covers an introduction to the ELIXIR Training platform TeSS - some of its features, functions and future work. It also has a whistle-stop tour of Bioschemas - the group developing schema.org for the life sciences.
ELIXIR TeSS and Bioschemas: An aggregated portal and an aggregation tool - Niall Beard
Webinar by Niall Beard about the aggregated training platform TeSS, and the schema.org working group Bioschemas. The presentation describes the need for TeSS, many of its features, and a look into the difficulties of aggregating ANY data online. We go on to the solution of using schema.org as a lightweight method for structuring data. This talk was for the FAIRDOM webinar series. A live recording of it can be found here.
TeSS: ELIXIR Training Portal (Eubic Winter School 2017) - Niall Beard
EuBiC Proteomics winter school. Presentation about the ELIXIR training platform and the TeSS training portal, for the Training Workflows workshop given on Monday 10th Jan 2017.
Bioschemas for Aggregating ELIXIR Events - Comms Webinar - Niall Beard
This document summarizes TeSS, a tool for aggregating and registering training events and materials for ELIXIR. TeSS allows users to search, filter, and discover training events and organize them into packages and workflows. Content from various sites can be distributed via TeSS by marking it up with schema.org tags, which improves search engine optimization. TeSS will also track ELIXIR training metrics and activities to prevent duplicate data entry.
TeSS training coordinators meeting. New features including the iAnn handover, event subscriptions and curation tools, plus a discussion of how to curate content in TeSS.
Presentation given by Niall Beard in November 2016 at Rothamsted research centre. The presentation is a use case overview of bioschemas (schema.org for science) and how it can be used to structure data included in aggregated registries. The registry is TeSS (https://tess.elixir-uk.org) which collects training events and materials from a distributed heterogeneous collection of sources
Bioschemas presentation at ECCB 2016, The Hague - Niall Beard
Bioschemas is an open community initiative that aims to improve data interoperability in the life sciences. It does so by encouraging life scientists to use schema.org mark-up, so that their websites and services contain consistently structured information. This makes it easier to discover, collate and analyse distributed data. The main outcome of Bioschemas is a collection of specifications that provide guidelines to facilitate a more consistent adoption of schema.org mark-up within the life sciences.
This document provides an overview of ELIXIR, a European infrastructure for biological information. ELIXIR aims to coordinate life science data infrastructure across Europe to support data sharing and interoperability. It has 17 member countries and 2 observer countries. ELIXIR focuses on areas like metadata standards, APIs, identifiers, and secure access to data. It also promotes principles like findability, accessibility, interoperability, and reusability of data. The document discusses ELIXIR's role in supporting areas like marine metagenomics, human data, crops/plants, and rare diseases. It notes that ELIXIR's hub is located alongside the European Molecular Biology Laboratory European Bioinformatics Institute.
The document discusses TeSS (Training eSupport System), a central registry maintained by ELIXIR to index training materials from various sources using a scraping process. It notes challenges with scraping including being labor intensive, sensitive to changes, and not scalable. The document proposes using Schema.org annotations on web pages to facilitate automated indexing of training materials and events. It encourages involvement in the TeSS and BioSchemas initiatives.
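On the harvesting side, collecting such annotations can be as simple as pulling the JSON-LD blocks out of a provider's pages. The sketch below does this with requests and BeautifulSoup against a placeholder URL; it illustrates the idea only and is not TeSS's actual harvesting code.

```python
import json
import requests
from bs4 import BeautifulSoup

def extract_jsonld(url):
    """Fetch a page and return every embedded JSON-LD (schema.org) block."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    blocks = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            blocks.append(json.loads(tag.string or ""))
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than failing the harvest
    return blocks

# Hypothetical provider page; an aggregator would loop over many such URLs.
for block in extract_jsonld("https://example.org/training/events"):
    if block.get("@type") in ("Event", "CourseInstance"):
        print(block.get("name"), block.get("startDate"))
```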
The Biodiversity Catalogue and support for Web Map Services - TDWG 2015 - Niall Beard
This document discusses web services and the Biodiversity Catalogue. It defines what web services are and provides an overview of the Biodiversity Catalogue, which allows users to register, find, and invoke web services. The Biodiversity Catalogue benefits both service providers and community members. For service providers, it provides easy registration, exposure to potential users, and community annotation of services. For community members, it allows exploration of web services through search and filtering and ensures long-term reliability of services through monitoring. The document also provides details on Web Map Services (WMS) and examples of WMS usage.
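For readers unfamiliar with WMS, a GetMap call is just an HTTP request carrying a handful of standard OGC parameters. The sketch below fetches a rendered map from a placeholder endpoint and layer; only the URL and layer name are invented, while the parameters themselves follow WMS 1.3.0.

```python
import requests

WMS_URL = "https://maps.example.org/geoserver/wms"  # placeholder WMS endpoint

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "species_occurrences",   # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    # WMS 1.3.0 with EPSG:4326 uses lat,lon axis order: minLat,minLon,maxLat,maxLon
    "BBOX": "50.0,-6.0,59.0,2.0",
    "WIDTH": 800,
    "HEIGHT": 600,
    "FORMAT": "image/png",
}

resp = requests.get(WMS_URL, params=params, timeout=60)
resp.raise_for_status()
with open("occurrences.png", "wb") as fh:
    fh.write(resp.content)
print("Saved", len(resp.content), "bytes")
```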
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT stylesheets and schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating, explaining, or refactoring code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it into advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
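To make the idea of prompting an AI to enrich plain text with XML mark-up concrete, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording and target elements are assumptions chosen for illustration, not recommendations from the presentation, and the returned XML should still be validated against the intended schema.

    from openai import OpenAI  # requires the "openai" package and an API key in OPENAI_API_KEY

    client = OpenAI()

    plain_text = "Article 3. Data providers must register their services before publication."

    # Ask the model to wrap the text in simple, DocBook-like elements (illustrative choice).
    prompt = (
        "Enrich the following plain text with XML markup. "
        "Wrap the whole passage in <section>, the heading in <title>, "
        "and each sentence of body text in <para>. Return only well-formed XML.\n\n"
        + plain_text
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)  # validate before committing to a repository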
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to part 6 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover test automation with generative AI and OpenAI.
This webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI into the UiPath test automation solution, together with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
CAKE: Sharing Slices of Confidential Data on Blockchain - Claudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
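The sketch below is not the CAKE tool itself, only a toy illustration of the underlying idea: the confidential payload is symmetrically encrypted, only the ciphertext (and a digest) is placed on the public ledger, and the key is handed out off-chain to the parties allowed to read that slice. The library choice and the key-distribution step are assumptions.

    import hashlib
    from cryptography.fernet import Fernet  # symmetric encryption from the "cryptography" package

    payload = b'{"shipment": "XYZ-42", "route": "confidential"}'  # a confidential data slice

    key = Fernet.generate_key()            # per-slice key, shared off-chain with authorised readers
    ciphertext = Fernet(key).encrypt(payload)

    # Only non-confidential material would be written to the public blockchain
    # (here it is just printed; a real system submits a transaction instead).
    on_chain_record = {
        "ciphertext": ciphertext.decode(),
        "digest": hashlib.sha256(payload).hexdigest(),  # lets authorised readers verify integrity
    }
    print(on_chain_record)

    # An authorised participant who received `key` can recover the slice:
    assert Fernet(key).decrypt(ciphertext) == payload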
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
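For a feel of the API, the sketch below runs an Atlas Vector Search query through PyMongo's aggregation pipeline. The connection string, database, collection, index name and embedding values are placeholders, and it assumes a vector search index has already been created in Atlas on the "embedding" field.

    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")  # placeholder URI
    collection = client["shop"]["products"]  # hypothetical database/collection

    query_vector = [0.12, -0.03, 0.87]  # in practice: the query embedded with the same model as the stored vectors

    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",   # name of the Atlas Vector Search index (assumed)
                "path": "embedding",       # field holding the document embeddings
                "queryVector": query_vector,
                "numCandidates": 100,      # candidates considered before ranking
                "limit": 5,                # top results returned
            }
        },
        {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

    for doc in collection.aggregate(pipeline):
        print(doc)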
GraphRAG for Life Science to increase LLM accuracy - Tomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs and use it to ground LLMs, increasing the accuracy and performance of generated answers.
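A minimal GraphRAG loop, assuming a Neo4j biomedical knowledge graph and an LLM client: retrieve graph facts relevant to the question with Cypher, then pass them to the model as grounding context. The Cypher pattern, node labels, connection details and model call are illustrative assumptions, not the author's code.

    from neo4j import GraphDatabase
    from openai import OpenAI

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))  # placeholder connection
    llm = OpenAI()

    question = "Which genes are associated with cystic fibrosis?"

    # Retrieve candidate facts from a hypothetical Disease-[:ASSOCIATED_WITH]-Gene graph.
    cypher = (
        "MATCH (d:Disease {name: $disease})<-[:ASSOCIATED_WITH]-(g:Gene) "
        "RETURN g.symbol AS gene LIMIT 25"
    )
    with driver.session() as session:
        facts = [record["gene"] for record in session.run(cypher, disease="cystic fibrosis")]

    # Ground the answer in the retrieved facts instead of relying on parametric memory alone.
    prompt = f"Using only these graph facts: {facts}\nAnswer the question: {question}"
    answer = llm.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(answer.choices[0].message.content)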
AI-Powered Food Delivery: Transforming App Development in Saudi Arabia - Techgropse Pvt. Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered:
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Generating privacy-protected synthetic data using Secludy and Milvus - Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
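As background for the demo (this is not Secludy's actual code), here is a minimal PyMilvus sketch that stores embeddings and runs a similarity search; the collection name, dimensionality and vector values are placeholders.

    from pymilvus import MilvusClient

    client = MilvusClient(uri="http://localhost:19530")  # placeholder Milvus endpoint

    # Collection of 8-dimensional embeddings (real deployments use model-sized vectors, e.g. 768 or 1536 dims).
    client.create_collection(collection_name="synthetic_demo", dimension=8)

    client.insert(
        collection_name="synthetic_demo",
        data=[
            {"id": 1, "vector": [0.1, 0.2, 0.0, 0.3, 0.5, 0.1, 0.0, 0.9]},
            {"id": 2, "vector": [0.9, 0.1, 0.4, 0.0, 0.2, 0.7, 0.3, 0.1]},
        ],
    )

    # Nearest-neighbour search for a query embedding.
    hits = client.search(
        collection_name="synthetic_demo",
        data=[[0.1, 0.2, 0.1, 0.3, 0.4, 0.1, 0.0, 0.8]],
        limit=1,
    )
    print(hits)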
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Monitoring and Managing Anomaly Detection on OpenShift - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
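To tie several of the steps above together, here is a minimal sketch (not the tutorial's own code) that consumes sensor readings from Kafka, applies a trivial threshold-based anomaly check in place of a trained model, and exposes counters to Prometheus. The topic name, broker address, threshold and port are assumptions.

    import json

    from kafka import KafkaConsumer                      # kafka-python package
    from prometheus_client import Counter, Gauge, start_http_server

    ANOMALIES = Counter("anomalies_detected_total", "Number of anomalous readings seen")
    LAST_VALUE = Gauge("last_sensor_value", "Most recent sensor reading")

    start_http_server(8000)  # Prometheus scrapes metrics from http://<pod>:8000/metrics

    consumer = KafkaConsumer(
        "sensor-readings",                               # hypothetical topic name
        bootstrap_servers="my-kafka:9092",               # placeholder broker address
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    THRESHOLD = 42.0  # stand-in for a trained model's decision rule

    for message in consumer:
        value = message.value["reading"]
        LAST_VALUE.set(value)
        if value > THRESHOLD:                            # a real system would call the trained model here
            ANOMALIES.inc()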
How to Get CNIC Information System with Paksim Ga - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
2. Overview
• Motivation: Web services discovery problem
• Structure of Service Metadata
• Ontological Classification
• Features
• Site Statistics
• Remarks
• http://biodiversitycatalogue.org
3. Web Services Discovery
[Diagram: questions posed by a Web service provider and a scientist]
• How can I advertise my Web services?
• What information do people need about them?
• What can this Web service do? How do I use it?
• How do I know the Web service will still be working tomorrow?
• How can I find the right Web service?
6. Monitoring
• Web service monitoring
  – Services change and get outdated
  – Long-term reliability
  – Testing on a daily basis
10. Other Features
• Data Search – Search using your data to find services with corresponding input/output types
• Script Testing (Defunct) – Write tests to check the operational functionality of the web service
11. Structure of Service Metadata
• Profile
• Documentation URL
• Description
• License
• Cost
• Contact Info
• Usage Conditions
• How to cite
• Publications about service
• Example workflows
• Maturity
12. Structure of Service Metadata
• Technical
  – Description of endpoints/operations
  – Example endpoints
  – Documentation URLs
• Input Parameters
– Description
– Default Value
– Constrained Values
– Example Data
– Required or Optional
• Output Representations
– Content Type (e.g. text/csv)
– Example data
– Data formats
– Data Schemas
– Tags
14. Bio.tools Integration
• BioCatalogue exports its tools to the bio.tools registry
• Bio.tools uses the EDAM ontology to annotate:
  – Topic
  – Data
  – Format
  – Operation
• Unfortunately, no such detailed ontology tagging beyond topics
17. Remarks
• Annotations work well for assisting construction of workflows
  – Structured enough to help humans read
  – Free text annotations mean no service type or domain knowledge needed
• Semantic annotations would work better
  – Automated workflow construction would be the holy grail
  – Curation costs prohibitive
18. Example Data Services
• Ecological Niche Modelling with OpenModeller
• https://www.biodiversitycatalogue.org/services/1
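If you want to explore that example programmatically, BioCatalogue-based registries expose their service records over a REST interface; the sketch below is an unofficial illustration that requests the record above as JSON. The ".json" suffix and the response structure are assumptions, and the catalogue may no longer be online.

    import requests

    # Service URL taken from the slide; the ".json" suffix is an assumption about the REST interface.
    url = "https://www.biodiversitycatalogue.org/services/1.json"

    response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    response.raise_for_status()

    service = response.json()
    print(service)  # inspect the returned metadata (name, description, endpoints, annotations, ...)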