Harvesting Using the Open Archives Initiative Protocol: What Your OAI Stream ... (Sandra McIntyre)
Webinar from the Mountain West Digital Library
Sandra McIntyre, MWDL Director
Anna Neatrour, MWDL Digital Metadata Librarian
Want to understand what happens behind the scenes with MWDL harvesting? In this webinar, Sandra McIntyre and Anna Neatrour will explain the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and how it makes metadata aggregation possible in the MWDL. They will explain the process of harvesting and how MWDL normalizes your metadata. They will also show you how you can learn from your collections' OAI stream by using the six query verbs (requests) defined in the OAI-PMH.
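The six OAI-PMH verbs mentioned above can be issued directly against a repository's OAI endpoint as simple HTTP GET requests. As a minimal sketch (the base URL, set name, and record identifier below are made up for illustration), these are the request URLs a harvester would build:

```python
from urllib.parse import urlencode

# Hypothetical repository OAI endpoint; substitute your own.
BASE = "https://example.org/oai"

def oai_request_url(verb, **params):
    """Build an OAI-PMH request URL for one of the six protocol verbs."""
    return BASE + "?" + urlencode({"verb": verb, **params})

# The six OAI-PMH verbs with typical arguments:
urls = [
    oai_request_url("Identify"),
    oai_request_url("ListMetadataFormats"),
    oai_request_url("ListSets"),
    oai_request_url("ListIdentifiers", metadataPrefix="oai_dc"),
    oai_request_url("ListRecords", metadataPrefix="oai_dc", set="some_collection"),
    oai_request_url("GetRecord", metadataPrefix="oai_dc",
                    identifier="oai:example.org:record-1"),
]
for url in urls:
    print(url)
```

Pasting any of these URLs into a browser against a real endpoint returns the XML response that harvesters such as MWDL's parse.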
This document is the proposal for the TRACKOER project that is supported by the JISC Open Educational Resources Rapid Innovation programme (OERRI). In TRACKOER we are developing potential solutions to how to keep track of content in the open. We are looking both at ways to follow content as it moves from one server to another and then gets reused, and at how to capture other changes that people may make with cut and paste editing. The rationale for the project is to understand whether content gets reused but it also offers a model that could help track other activity around shared content. More about the project progress is available via http://track.olnet.org/ and the project blog at http://cloudworks.ac.uk/tag/view/TrackOER .
Open Data management is still neither trivial nor sustainable. The COMSODE results bring automation to the publication and management of Open Data in public institutions and companies. The presentation includes the Open Data Ready standard proposal, three use cases, and an invitation to join Horizon 2020 projects in 2016.
Text Mining - Techniques & Limitations (A Pharmaceutical Industry Viewpoint) (Frank Oellien)
Presentation given at the 6th and last meeting of the European Commission "Licenses for Europe" Text and Data Mining Working Group (WG4).
The first part of the talk gives a very brief introduction to some basic concepts of text mining techniques used in the pharmaceutical industry, based on the Accelrys PP text mining collection.
The second part of the talk focuses on the existing limitations pharmaceutical companies face in the field of text mining.
http://ec.europa.eu/licences-for-europe-dialogue/en/content/text-and-data-mining-working-group-wg4
Better together: building services for public good on top of content from the... (petrknoth)
CORE hosts the world’s largest collection of open access full texts, offering seamless, unrestricted access to research for citizens, researchers, libraries, software developers, funders and others. CORE’s aggregated content comes from thousands of institutional and subject repositories as well as journals and covers all research disciplines. In January 2019, CORE hit the mark of 10 million monthly active users (10.41 million users). In September 2019, core.ac.uk made it into the top 5,000 websites globally by user engagement, as measured by the independent Alexa Rank, making it clearly one of the world’s most widely used Open Access services.
In this talk, Petr and Nancy will explain the role of CORE in the open science ecosystem. They will introduce the solutions CORE offers for improving the delivery of research literature, including tools for discovering freely available copies of papers that might be behind publishers’ paywalls as well as a recommender system for open access literature. The use of CORE data to monitor compliance with open access policies has also recently received attention. The presenters will then reflect on the challenges in the sector and share their experience of building value-added services for society on top of open content offered by libraries and their affiliated institutional repositories and open access journals.
This presentation was provided by Rachel Kessler of ProQuest at the NISO Annual Members Meeting and Standards Update, held on June 26, 2020. It provides an overview of NISO activities during the calendar year 2019.
Manuel Noya talks about the science-industry relationship driven by competitive intelligence and how to surf emerging technologies
Workshop title: TDM unlocking a goldmine of information
Training overview:
Text and Data Mining (TDM) is a natural ‘next step’ in open science. It can lead to new and unexpected discoveries and increase the impact of publications and repositories. This workshop showcases examples of successful TDM and infrastructural solutions for researchers. We will also discuss what is needed to make the most of infrastructures and how publishers and repositories can open up their content.
DAY 2 - PARALLEL SESSION 4 & 5
This presentation was provided by Laura Morse to inform participants about progress made on the Open Discovery Initiative, at the NISO Standards Update event held during ALA Midwinter on Saturday, January 25, 2020.
I presented this keynote talk at the WorldComp conference in Las Vegas, on July 13, 2009. In it, I summarize what grid is about (focusing in particular on the "integration" function, rather than the "outsourcing" function--what people call "cloud" today), using biomedical examples in particular.
Screenshots prepared by Ben Blaiszik and Kyle Chard, used in our Globus publication demo at GlobusWorld 2014. See https://www.globus.org/data-publication for more information and the notes on the slides for details.
Connecting repositories - Open University - The CORE project aims to make it easier to navigate between relevant scientific papers stored in Open Access repositories. The project will use Linked Data format to describe the relationships between papers stored across a selection of UK repositories, including the Open University Open Research Online (ORO). A resource discovery web-service and a demonstrator client will be provided to allow UK repositories to embed this new tool into their own repository.
Amicable resources corporate presentation - Human resource company (rachna1122)
Amicable Resources, as every client's satisfaction partner, provides innovative HR solutions by continually exceeding customer expectations. We are an HR consulting firm with a strong team, providing the best human resource solutions to our clients. Whether it is recruitment, training, or HR advisory, we always come up with the best solutions.
My repository is being aggregated: a blessing or a curse? (petrknoth)
Usage statistics are frequently used by repositories to justify their value to the management who decide about the funding to support the repository infrastructure. Another reason for collecting usage statistics at repositories is the increased use of webometrics in the process of assessing the impact of publications and researchers. Consequently, one of the worries repositories sometimes have about their content being aggregated is that aggregation will have a detrimental effect on the accuracy of the statistics they collect. They believe that this potential decrease in reported usage can negatively influence the funding provided by their own institutions. This raises the fundamental question of whether repositories should allow aggregators to harvest their metadata and content. In this paper, we discuss the benefits of allowing content aggregations to harvest repository content and investigate how to overcome the drawbacks.
CORE: Aggregating and Enriching Content to Support Open Access (petrknoth)
The last 10 years have seen a massive increase in the amounts of Open Access publications available in journals and institutional repositories. The existence of large volumes of free state-of-the-art knowledge online has the potential to provide huge savings and benefits in many fields. However, in order to fully leverage this knowledge, it is necessary to develop systems that (a) make it easy for users to access, discover and explore this knowledge, (b) lower the barriers to the development of systems and services building on top of this knowledge and (c) enable anyone to freely analyse how this knowledge is organised and used. In this paper, we argue why these requirements should be fulfilled and show that current systems do not satisfy them. We also present CORE, a large-scale Open Access aggregation system, outline its functionality and architecture, and demonstrate how it addresses the above-mentioned needs and how it can be applied to benefit the whole ecosystem, including institutional repositories, researchers, the general public and government.
Semantometrics: Towards Fulltext-based Research Evaluation (petrknoth)
Over the recent years, there has been a growing interest in developing new scientometric measures that could go beyond the traditional citation-based bibliometric measures. This interest is motivated on one side by the wider availability or even emergence of new information evidencing research performance, such as article downloads, views and twitter mentions, and on the other side by the continued frustrations and problems surrounding the application of citation-based metrics to evaluate research performance in practice.
Semantometrics are a new class of research evaluation metrics which build on the premise that full text is needed to assess the value of a publication. This talk will present the results of an investigation into the properties of the semantometric contribution measure (Knoth & Herrmannova, 2014). We will provide a comparative evaluation of the contribution measure with traditional bibliometric measures based on citation counting.
OSFair2017 training | Machine accessibility of Open Access scientific publica... (Open Science Fair)
Petr Knoth talks about machine accessibility of Open Access scientific publications from publisher systems via ResourceSync
Training title: TDM unlocking a goldmine of information
Training overview:
Text and Data Mining (TDM) is a natural ‘next step’ in open science. It can lead to new and unexpected discoveries and increase the impact of publications and repositories. This workshop showcases examples of successful TDM and infrastructural solutions for researchers. We will also discuss what is needed to make the most of infrastructures and how publishers and repositories can open up their content.
DAY 2 - PARALLEL SESSION 4 & 5
OSFair2017 Workshop | Building a global knowledge commons - ramping up reposi... (Open Science Fair)
Eloy Rodrigues, Petr Knoth & Kathleen Shearer showcase the conceptual model for this vision, as well as the role and functions of repositories within this model.
Workshop title: Building a global knowledge commons - ramping up repositories to support widespread change in the ecosystem
Workshop abstract:
The extensive international deployment of repository systems in higher education and research institutions, as well as scholarly communities, provides the foundation for a distributed, globally networked infrastructure for scholarly communication. This distributed network of repositories can and should be a powerful tool to promote the transformation of the scholarly communication ecosystem. However, repository platforms are still using technologies and protocols designed almost twenty years ago, before the boom of the web and the dominance of Google, social networking, semantic web and ubiquitous mobile devices. In April 2016, the Confederation of Open Access Repositories (COAR) launched a working group to help identify new functionalities and technologies for repositories and develop a road map for their adoption. For the past several months, the group has been working to define a vision for repositories and sketch out the priority user stories and scenarios that will help guide the development of new functionalities. The results of this work will be available in the summer of 2017.
This workshop will present the functionalities and technologies for the next generation of repositories and reflect on how these functionalities will be adopted into the existing software platforms. In addition, participants will discuss the important implications for the network layers, and how repositories will uniformly interact with the networks to provide value added services on top of their content.
DAY 3 - PARALLEL SESSION 6 & 7
http://www.opensciencefair.eu/workshops/parallel-day-3-1/building-a-global-knowledge-commons-ramping-up-repositories-to-support-widespread-change-in-the-ecosystem
A presentation given at the EuroSakai 2011 conference in Amsterdam on 27th September 2011. It covers the work of the CLIF project to investigate the management of the digital lifecycle across systems, using the integration of the Sakai collaboration and learning environment with the Fedora digital repository system as an exemplar.
COAR Next Generation Repositories WG - Text mining and Recommender system sto... (petrknoth)
One of the key aims of the COAR NGR group is to help us to overcome the challenges that still make it difficult to move beyond repositories as document silos. The group wants to see a globally interoperable network of repositories and global services built on top of repositories fulfilling the expectations of users in the 21st century. During this talk, I will address two use cases the COAR NGR working group aims to enable: text and data mining and recommender systems.
Towards effective research recommender systems for repositories (petrknoth)
In this paper, we argue why and how the integration of recommender systems for research can enhance the functionality and user experience in repositories. We present the latest technical innovations in the CORE Recommender, which provides research article recommendations across the global network of repositories and journals. The CORE Recommender has been recently redeveloped and released into production in the CORE system and has also been deployed in several third-party repositories. We explain the design choices of this unique system and the evaluation processes we have in place to continue raising the quality of the provided recommendations. By drawing on our experience, we discuss the main challenges in offering a state-of-the-art recommender solution for repositories. We highlight two of the key limitations of the current repository infrastructure with respect to developing research recommender systems: 1) the lack of a standardised protocol and capabilities for exposing anonymised user-interaction logs, which represent critically important input data for recommender systems based on collaborative filtering and 2) the lack of a voluntary global sign-on capability in repositories, which would enable the creation of personalised recommendation and notification solutions based on past user interactions.
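To illustrate why anonymised user-interaction logs are such important input data for collaborative filtering (limitation 1 in the abstract), here is a toy item-based co-occurrence recommender. The log, session identifiers and article identifiers are entirely made up; a production system would of course use far richer signals:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical anonymised interaction log: (session_id, article_id) pairs.
log = [
    ("s1", "a"), ("s1", "b"), ("s2", "a"), ("s2", "b"),
    ("s2", "c"), ("s3", "b"), ("s3", "c"),
]

# Group articles viewed within the same session.
sessions = defaultdict(set)
for session, article in log:
    sessions[session].add(article)

# Count co-occurrences: articles viewed together are likely related.
cooc = defaultdict(int)
for articles in sessions.values():
    for x, y in combinations(sorted(articles), 2):
        cooc[(x, y)] += 1
        cooc[(y, x)] += 1

def recommend(article, k=2):
    """Return up to k articles most often co-viewed with `article`."""
    scores = {y: n for (x, y), n in cooc.items() if x == article}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("a"))  # → ['b', 'c']
```

Without a standardised protocol for exposing such logs, each repository's interaction data stays siloed and the co-occurrence counts above cannot be computed across the network, which is exactly the gap the paper points out.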
Cloud web scale discovery services landscape: an overview (Nikesh Narayanan)
Abstract
The Internet and Google-like search engines have radically influenced the information behaviour of Net Generation users. They expect the same environment in library services, such that all the information they require is made available in a single set of results through a unified search across all available resources. Libraries have been striving to respond to this challenge for years. Until recently, the federated search technology of the past decade was the best attempt in this area to meet these user expectations, but federated search is marked by the drawback of slowness, as it searches each database on the fly. New-generation cloud-based library web scale discovery technology is a promising entrant in this landscape. This paper attempts to provide a comprehensive overview of library web scale discovery solutions by depicting their various facets, such as their importance to the library field, their possible role as the starting point for research, and their content coverage, and finally analyses the competition at the discovery front by comparing the services of major players. The comparative analysis shows that all the major service providers are extending competitive features and services but vary in some areas, and the adoption choice depends on the concerned library's preferences and the cost involved.
This work presents a data architecture based on semantic web technologies that supports the inclusion of open materials in massive online courses. The framework provides transparent access to RDF data sources for Open Educational Resources stored in OpenCourseWare repositories.
Speaker(s): Nelson Piedra and Edmundo Tovar
How can repositories support the text-mining of their content and why? (Nancy Pontika)
Co-presented with Petr Knoth http://www.slideshare.net/petrknoth/ at the "Mining Repositories: How to assist the research and academic community on their text and data mining needs" workshop, which took place at the 11th International Conference on Open Repositories, Monday 13 June 2016.
Capturing Conversations, Context and Curricula: The JLeRN Experiment and the ... (Sarah Currier)
These slides accompany the paper "Capturing Conversations, Context and Curricula: The JLeRN Experiment and the Learning Registry" published by the Cambridge 2012: Innovation and Impact - Openly Collaborating to Enhance Education conference, organised by OCWC and SCORE (Support Centre for Open Resources in Education).
Qui Bono? Cumulative advantage in open access publishing (petrknoth)
We identify whether barriers related to accessing research literature, such as being located at an institution with limited access to non-OA literature, affect the citation behaviour of scholars. Question 1: Do scholars located in less developed regions or at less prestigious institutions rely more on (consume more) OA because their access to subscription literature is limited?
Question 2: Do those who benefit from OA also produce more OA or are production and consumption independent?
Presentation of the CORE API v3, which provides seamless programmable access to the metadata and content from across the global repository network, delivered at Open Repositories 2022.
OAI Identifiers: Decentralised PIDs for Research Outputs in Repositories (petrknoth)
An OAI (Open Archives Initiative) identifier is a unique identifier of a metadata record. OAI identifiers are used in the context of repositories using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH); however, the process by which they are assigned can, in principle, be used more broadly elsewhere.
In comparison to DOIs, OAI identifiers are registered in a distributed rather than centralised manner, and there is therefore no cost for minting them. OAI identifiers are persistent identifiers in repositories that declare their level of support for deleted documents as persistent in the deletedRecord element of the Identify response. CORE recommends that repositories provide this persistent level of support.
OAI identifiers are viable PIDs for repositories that, as opposed to DOIs, can be minted in a distributed fashion and cost-free, and that can resolve directly to the repository rather than to the publisher.
This approach has the potential to increase the importance of repositories in the process of disseminating knowledge. CORE provides a global OAI Resolver built on top of the CORE research outputs aggregation system.
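The oai-identifier scheme described above follows the pattern oai:&lt;namespace-identifier&gt;:&lt;local-identifier&gt;, which makes it straightforward to handle programmatically. A minimal parsing sketch, with a made-up repository namespace and record id:

```python
def parse_oai_identifier(identifier):
    """Split an OAI identifier into its namespace and local parts.

    The scheme is oai:<namespace-identifier>:<local-identifier>;
    the local part may itself contain colons or slashes, so we only
    split on the first two separators.
    """
    scheme, namespace, local = identifier.split(":", 2)
    if scheme != "oai":
        raise ValueError("not an OAI identifier: " + identifier)
    return namespace, local

# Illustrative identifier; the repository and record are made up.
namespace, local = parse_oai_identifier("oai:repository.example.org:record/123")
print(namespace)  # repository.example.org
print(local)      # record/123
```

A resolver like the one CORE provides essentially performs this split, looks the namespace up in a registry of known repositories, and redirects to the record there.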
Tracking compliance of the REF2021 policy with the CORE Repository Dashboard (petrknoth)
Topics: CORE and the REF2021 audit; where CORE is mentioned in the REF 2021 OA Policy; how CORE collects data; a CORE Repository Dashboard demonstration; getting access to the Dashboard.
Despite being controversial, research metrics are becoming a key component of research evaluation processes globally. Nevertheless, accessing research metrics to support these processes in a timely manner is not a straightforward task, as it requires either having access to expensive commercial solutions such as Elsevier SciVal or Clarivate Analytics' InCites, or having substantial knowledge of existing APIs and data sources as well as the ability and skills needed to analyse large amounts of raw scholarly data in-house. This is especially the case on a department or institutional level where large amounts of data have to be aggregated prior to analysis. To alleviate this problem we have designed and prototyped CORE Analytics Dashboard – a tool for analytical evaluation of research outputs of universities. The aim of the CORE Analytics Dashboard is to help universities analyse their performance using a variety of metrics captured from openly available data sources, including citation counts and social media metrics, and to help them compare their performance with other institutions. This paper presents the motivation behind developing this dashboard and its main features.
Analysing the performance of open access papers discovery tools (petrknoth)
Open Access discovery tools aim to locate freely available copies of research papers which might be behind the paywall on a publisher’s website. Our study provides a large scale quantitative performance comparison of several OA discovery tools on a randomly selected sample of 100k DOIs from CrossRef. We use the acquired knowledge from this analysis to build a new discovery tool - CORE Discovery.
Assessing Compliance with the UK REF 2021 Open Access Policy (petrknoth)
The recent increase in Open Access (OA) policies has brought forth important questions concerning the effect these policies have on the practice of publishing Open Access. In particular, is there evidence to support that mandating OA increases the proportion of OA outputs (in other words, do authors comply with relevant policies)? Furthermore, does mandating OA reduce the time from acceptance to the public availability of research outputs, and can compliance with OA mandates be effectively tracked? This work studies compliance with the UK REF 2021 Open Access policy. We use data from CrossRef and from CORE to create a dataset containing 1.6 million publications. We show that after the introduction of the UK OA policy, the proportion of OA research outputs in the UK has increased significantly, and the time lag between the acceptance of a publication and its Open Access availability has decreased, although there are significant differences in compliance between different repositories. We have developed a tool that can be used to assess publications' compliance with the policy based on a list of DOIs.
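The core measurement in this abstract, the time lag between acceptance and Open Access availability checked against a deposit window, can be sketched in a few lines. The records and dates below are made up, and the 92-day window is only an approximation of the policy's three-month deposit requirement, not the study's actual methodology:

```python
from datetime import date

# Made-up records: (acceptance date, date the output became OA in a repository).
records = [
    (date(2019, 1, 10), date(2019, 2, 1)),
    (date(2019, 3, 5), date(2019, 9, 30)),
    (date(2019, 6, 1), date(2019, 6, 20)),
]

# Approximate the three-month deposit window as 92 days.
WINDOW_DAYS = 92

# Lag in days between acceptance and OA availability for each record.
lags = [(oa - accepted).days for accepted, oa in records]

# A record is counted as compliant if it was made OA within the window.
compliant = sum(lag <= WINDOW_DAYS for lag in lags)

print("mean lag (days):", sum(lags) / len(lags))
print("compliance rate:", compliant / len(records))
```

Scaled up to the paper's 1.6 million publications, the same per-record computation yields the aggregate compliance and time-lag figures the study reports on.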
Integrating research indicators for use in the repositories infrastructure (petrknoth)
The current repository infrastructure, which consists of thousands of repositories, does not make effective use of research indicators largely exploited by commercial players in the area. Research indicators, including citation counts and Mendeley reader counts, enable the development and improvement of functionality researchers use on a daily basis. For example, they make it possible to increase performance in information retrieval and recommendation tasks and serve as an enabler for the development of research analytics & metrics functionality, such as the analysis of research trends or collaboration networks. We believe that there is a strong case for making better use of these indicators within the repositories infrastructure to improve the functionality of the services users rely on.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Assuring Contact Center Experiences for Your Customers With ThousandEyes
From Open Access Metadata to Open Access Content: Two Principles for Increased Visibility of Open Access Content
1. 1/31
From Open Access Metadata to
Open Access Content
Two Principles for Increased Visibility of Open
Access Content
Petr Knoth
CORE (Connecting REpositories) project
Knowledge Media institute
The Open University
@petrknoth, #diggicore
2. 2/31
Why from OA metadata to OA content?
• Free online availability of research outputs (not metadata of
research outputs) is the main goal of open access (OA)
• Repositories are one of the recommended ways to achieve this
(BOAI, 2002).
• Despite large amount of OA content already available online
(Laakso & Bjork, 2012), OA content is not necessarily easily
discoverable (Morrison, 2012; Konkiel, 2012).
• Often available, but difficult to find …
• Inhibiting the OA impact – accessibility, discoverability, reuse …
• Discoverability of OA content on the Web can be dramatically
increased by adopting two simple principles!
3. 3/31
Outline
1. Goals of repositories
2. The bleak truth about availability of OA metadata vs content
3. Content referencing practices in repositories
4. Two principles to increase visibility of OA content
4. 4/31
Outline
1. Goals of repositories (repositories as large metadata silos)
2. The bleak truth about availability of OA metadata vs content
3. Content referencing practices in repositories
4. Two principles to increase visibility of OA content
5. 5/31
The primary purpose of repositories
• Institutional repositories (IRs) serve a number of purposes, such
as collecting and curating digital outputs, providing statistics,
demonstrating research excellence, etc.
• The primary goal of repositories is to open and disseminate
research outputs to a worldwide audience (Crow, 2002) –
SPARC’s position paper on the case for institutional repositories.
6. 6/31
SPARC’s position paper on IRs
“For the repository to provide access to the broader research
community, users outside the university must be able to find and
retrieve information from the repository. Therefore, institutional
repository systems must be able to support interoperability in order
to provide access via multiple search engines and other discovery
tools. An institution does not necessarily need to implement
searching and indexing functionality to satisfy this demand: it could
simply maintain and expose metadata, allowing other services to
harvest and search the content. This simplicity lowers the barrier to
repository operation for many institutions, as it only requires a file
system to hold the content and the ability to create and share
metadata with external systems.”
7. 7/31
COAR: About harvesting and aggregations …
“Each individual repository is of limited value for research: the real
power of Open Access lies in the possibility of connecting and tying
together repositories, which is why we need interoperability. In
order to create a seamless layer of content through connected
repositories from around the world, Open Access relies on
interoperability, the ability for systems to communicate with each
other and pass information back and forth in a usable format.
Interoperability allows us to exploit today's computational power so
that we can aggregate, data mine, create new tools and services,
and generate new knowledge from repository content.’’
[COAR manifesto]
8. 8/31
We need OA to content (not just metadata)
• Repositories (even the most prominent) are often seen by
aggregation systems as large metadata silos.
• OA to metadata is not disruptive. Little difference to the
traditional publishing model.
9. 9/31
Outline
1. Goals of repositories (repositories as large metadata silos)
2. The bleak truth about availability of OA metadata vs content
3. Content referencing practices in repositories
4. Two principles to increase visibility of OA content
10. 10/31
Study
• 83 repositories (mainly EPrints with pdf research outputs)
• 1,461,016 metadata records
• Ratio of metadata to content
• Data acquired from CORE (Knoth & Zdrahal, 2012)
13. 13/31
Why is this a problem?
• Lower accessibility of papers (we have them, but cannot find
them)
• Text-mining
• Cannot monitor growth
• Losing a strong argument for the adoption of OA!
14. 14/31
Outline
1. Goals of repositories (repositories as large metadata silos)
2. The bleak truth about availability of OA metadata vs content
3. Content referencing practices in repositories
4. Two principles to increase visibility of OA content
15. 15/31
OAI-PMH and content referencing
• OAI-PMH supports representing metadata in multiple formats,
but at a minimum repositories must be able to return records
with metadata expressed in the Dublin Core format (OAI-PMH
v2.0, 2008)
• If repositories want to satisfy the SPARC guidelines (Crow, 2002),
they must provide a link to the content as part of the exposed
metadata.
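To make this concrete, here is a minimal sketch of pulling dc:identifier values out of an oai_dc record, as a harvester would do with a GetRecord or ListRecords response. The record content and URL below are illustrative, not taken from a real repository:

```python
import xml.etree.ElementTree as ET

# A minimal oai_dc metadata fragment, as it might appear inside the
# <metadata> element of an OAI-PMH response (contents are illustrative).
RECORD = """<oai_dc:dc
    xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>An Example Paper</dc:title>
  <dc:identifier>http://oro.open.ac.uk/12345/1/paper.pdf</dc:identifier>
</oai_dc:dc>"""

# Dublin Core namespace, in ElementTree's {uri}localname form.
DC = "{http://purl.org/dc/elements/1.1/}"

def extract_identifiers(xml_text):
    """Return all dc:identifier values from an oai_dc record."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall(f"{DC}identifier")]

print(extract_identifiers(RECORD))
```

Whether the extracted string is a dereferencable link to the content or merely a splash-page URL is exactly the ambiguity the following slides address.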
16. 16/31
OAI-PMH and content referencing
The Open Research Online repository (EPrints) links directly to the
resource from metadata.
Cranfield repository (DSpace) identifies the resource by providing a
link to a page from which the resource (if available) can be accessed.
17. 17/31
OAI-PMH and content referencing
The OAI-PMH specification states on this topic that:
“The nature of a resource identifier is outside the scope of the OAI-
PMH. To facilitate access to the resource associated with harvested
metadata, repositories should use an element in metadata records
to establish a linkage between the record (and the identifier of its
item) and the identifier (URL, URN, DOI, etc.) of the associated
resource. The mandatory Dublin Core format provides the identifier
element that should be used for this purpose.”
18. 18/31
OAI-PMH and content referencing
• What is an identifier of the associated resource?
Is a splash page an identifier? According to OAI-PMH examples it is:
<dc:identifier>http://arXiv.org/abs/cs/0112017</dc:identifier>
• The standard is pretty weak on this aspect …
19. 19/31
Outline
1. Goals of repositories (repositories as large metadata silos)
2. The bleak truth about availability of OA metadata vs content
3. Content referencing practices in repositories
4. Two principles to increase visibility of OA content
20. 20/31
The principles of the principles
• Pragmatic rather than exciting.
• Generating maximum benefit for a minimum investment.
• Deliberately use current standards to minimise adoption time.
• Respecting differences across systems and backwards
compatibility.
• Emphasising the need for easy-to-use compliance mechanisms to
assist repository managers in ensuring systems interoperability.
21. 21/31
Principle 1 – Content referencing
Open repositories should always establish a link from the metadata
record to the item the metadata record describes using a
dereferencable identifier pointing to the version held in the
repository. The dereferencable identifier should be provided in the
appropriate element of the metadata format used (i.e.
dc:identifier in the case of Dublin Core).
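As an illustration of Principle 1, a harvester might apply a cheap heuristic like the one below before doing a full check. The function name and host are hypothetical; a real validator would also issue an HTTP HEAD request and follow redirects to confirm the identifier actually resolves to the deposited file:

```python
from urllib.parse import urlparse

def looks_dereferencable(identifier, repository_host):
    """Cheap pre-check for Principle 1: is the identifier an http(s)
    URL hosted by the repository itself?  This only filters out
    obvious non-links (titles, bare handles); confirming that the URL
    resolves to the content still requires an HTTP request."""
    parsed = urlparse(identifier)
    return (parsed.scheme in ("http", "https")
            and parsed.netloc.endswith(repository_host))

# A direct repository link passes; a bare title string does not.
print(looks_dereferencable("http://oro.open.ac.uk/12345/1/paper.pdf",
                           "oro.open.ac.uk"))   # True
print(looks_dereferencable("An Example Paper", "oro.open.ac.uk"))  # False
```

Note that a splash-page URL would also pass this heuristic, which is why the principle insists the identifier resolve to the version held in the repository, not merely to a page about it.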
22. 22/31
Implications: Principle 1 – Content referencing
• Repositories can use different standards to deliver metadata over
OAI-PMH (DC, METS, MPEG-21 DIDL)
• Identifier must resolve (be actionable) to the object it identifies
• In the case of DC, if more identifiers are present, use the first
identifier as the identifier of the object
• Should resolve to the version of the object in the local repository
• Similarity with RIOXX identifier field
• The principle is easily applicable in the OA domain: each item can
be freely resolved
23. 23/31
Open access statistics and principle 1
• Only dereferencable
items are OA
• Increases stats accuracy
• Avoids anecdotal
situations (e.g. 23,380
Dark Items)
24. 24/31
Principle 2 – Content accessibility to machines
Open repositories must provide universal access to machines with
the same level of access as humans have. It is the role of open
repositories to allow machines to harvest the entire content of the
repository in a reasonable time to enable harvesting systems to
acquire and maintain up-to-date information about the repository
content.
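Compliance with Principle 2 can be probed from the outside by reading the repository's robots.txt. The sketch below, with hypothetical rules loosely modelled on the arXiv example on the next slide, uses Python's standard robotparser to show a policy that welcomes one crawler while locking generic harvesters out of the full texts:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: one crawler is unrestricted, while every
# other agent is barred from the directory holding the full texts.
ROBOTS = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /pdf/
"""

rp = RobotFileParser()
rp.parse(ROBOTS.splitlines())

print(rp.can_fetch("Googlebot", "/pdf/cs/0112017"))     # True
print(rp.can_fetch("MyHarvester", "/pdf/cs/0112017"))   # False
```

Under Principle 2, the second call should also return True: a text-mining harvester deserves the same level of access as a search-engine crawler or a human reader.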
25. 25/31
Example from Arxiv.org
• Googlebot: unrestricted
• Yahoo/MSN: can
reharvest in 6 months
• Researchers: access
denied
26. 26/31
Implications: Principle 2 – Content accessibility to machines
• Accessibility of repository content by machines
• Enabling reuse through new services, such as those relying on
text-mining
• Open Repositories should not discriminate, except for abusive
behavior
• Presumption of innocence
28. 28/31
Conclusions
• Proportion of OA content that can be harvested is fairly low in
comparison to metadata
• Inhibiting the benefits of OA
• Two principles:
1) Dereferencable identifiers - Open Repositories provide open
access to content and not just metadata
2) Machine access – Open Repositories should provide free access
to content (for anybody, but mainly researchers)
• Compliance validation tools are needed
30. 30/31
References 1/2
[BOAI, 2002] Budapest Open Access Initiative. (2002)
http://www.opensocietyfoundations.org/openaccess/boai-10-recommendations
[Crow, 2002] Crow, R. (2002). The case for institutional repositories: a SPARC position
paper. ARL Bimonthly Report 223.
[Knoth & Zdrahal, 2012] Knoth, P. and Zdrahal, Z. (2012) CORE: Three Access Levels to
Underpin Open Access, D-Lib Magazine, 18, 11/12, Corporation for National Research
Initiatives, http://dx.doi.org/10.1045/november2012-knoth
[Konkiel, 2012] Konkiel, S. (2012) Are Institutional Repositories Doing Their Job?
https://blogs.libraries.iub.edu/scholcomm/2012/09/11/are-institutional-repositories-doing-their-job/
[Laakso & Bjork, 2012] Laakso, M., & Björk, B. C. (2012). Anatomy of open access
publishing: a study of longitudinal development and internal structure. BMC Medicine,
10(1), 124.
31. 31/31
References 2/2
[Morrison, 2012] Morrison, Louise (2012) 5 reasons why I can’t find Open Access publications.
http://mmitscotland.wordpress.com/2012/08/06/5-reasons-why-i-cant-find-open-access-publications-2/
[OAI-PMH v2.0, 2008] The Open Archives Initiative Protocol for Metadata Harvesting
Version 2.0 (OAI-PMH), Implementation Guidelines (2008).
http://www.openarchives.org/OAI/openarchivesprotocol.html
[ResourceSync draft, 2013] ResourceSync protocol draft. 2013
http://www.niso.org/workrooms/resourcesync/
[Salo, 2008] Salo, D. (2008). Innkeeper at the roach motel. Library Trends, 57(2), 98-123.
[Van de Sompel et al, 2004] Van de Sompel, H., Nelson, M. L., Lagoze, C., & Warner, S.
(2004). Resource harvesting within the OAI-PMH framework. D-Lib Magazine, 10(12), 1082-9873.
Editor's Notes
In this paper, we use the term institutional repositories (which are the main interest of the Open Repositories conference), to refer also to subject-based repositories or archives or systems for depositing research outputs used by open access publishers. As a result, the conclusions of this paper and recommendations are equally valid for both the green (self-archiving) and gold (OA publishing) routes to OA.
Let’s have a look at what some key players in the field think about the purpose of repositories.
It will not be possible to transfer to an OA culture unless we change the environment so that there will be clear benefits for researchers to participate in OA. These benefits should be technical rather than political.
I can specify the differences between this and RIOXX