The document discusses the University of Kentucky Libraries' efforts to build a digital repository by leveraging partnerships across campus. It outlines how the library advocated for a campus-wide repository model in 2007 and began populating the UKnowledge repository. As new data management requirements emerged from funders such as the NSF and NIH, the library explored technical options and settled on a microservices-based approach using Hydra, Archivematica, and CDL microservices. The library's roles include technical leadership, metadata, and data management plans; IT provides storage and infrastructure, and the research office provides policies and proposal support. The initial scope is serving research data needs, with potential future expansion to an enterprise repository.
RDAP14: Collaboration and tension between institutions and units providing da... (ASIS&T)
Research Data Access and Preservation Summit, 2014
San Diego, CA
March 26-28, 2014
David Minor, University of California, San Diego
Amanda Whitmire, Oregon State University
Stephanie Wright, University of Washington
Lisa Zilinski, Purdue University
RDAP14: It’s a Real World: Developing Preservation Policy for Dryad (ASIS&T)
Research Data Access and Preservation Summit, 2014
San Diego, CA
March 26-28, 2014
Ayoung Yoon
Dryad preservation working group, Doctoral Candidate at UNC-CH
Sara Mannheimer
Former Dryad curator, Data management librarian at Montana State University
Elena Feinstein, Jane Greenberg, Ryan Scherle
Dryad Digital Repository
Poster RDAP13: Data Information Literacy: Multiple Paths to a Single Goal (ASIS&T)
Jake Carlson, Jon Jeffryes, Brian Westra and Sarah Wright
Data Information Literacy: Multiple Paths to a Single Goal
Research Data Access & Preservation Summit 2013
Baltimore, MD April 4, 2013 #rdap13
Information technology and resources are an integral and indispensable part of the contemporary academic enterprise. In particular, technological advances have nurtured a new paradigm of data-intensive research. However, far too much of this activity still takes place in silos, to the detriment of open scholarly inquiry, integrity, and advancement. To counteract this tendency, the University of California Curation Center (UC3) has been developing and deploying a comprehensive suite of curation services that facilitate widespread data management, preservation, publication, sharing, and reuse. Through these services UC3 is engaging with new communities of use: in addition to its traditional stakeholders in cultural heritage memory organizations, e.g., libraries, museums, and archives, the UC3 service suite is now attracting significant adoption by research projects, laboratories, and individual faculty researchers. This webinar will present an introduction to five specific services – DMPTool, DataUp, EZID, Merritt, Web Archiving Service (WAS) – applicable to data curation throughout the scholarly lifecycle, two recent initiatives in collaboration with UC campuses, UC Berkeley Research Hub and UC San Francisco DataShare, and the ways in which they encourage and promote new communities of practice and greater transparency in scholarly research.
RDAP14: Building a data management and curation program on a shoestring budget (ASIS&T)
Research Data Access and Preservation Summit, 2014
San Diego, CA
Margaret Henderson
Director, Research Data Management
Virginia Commonwealth University
The goal of the Very Open Data Project is to provide a software-technical foundation for this exchange of data: specifically, an open database platform covering data from raw experimental measurements or model outputs, through intermediate manipulations, to finally published results. The sheer amount of data involved creates some unique software-technical challenges. One of these challenges is addressed in the part of the study presented here: characterizing scientific data (with an initial focus on detailed chemistry data from the combustion kinetics community) so that efficient searches can be made. This characterization is formalized as schemas describing the tags and keywords that describe the data, and as ontologies describing the relationships between data types and between the characterizations themselves. These will be translated into metadata tags connected to the data points within a non-relational database for the community.
The focus of the initial work will be on data and its accessibility. As the project progresses, the emphasis will shift from merely making available data accessible to the community toward enabling the community itself to contribute its own data with minimal effort. This will involve, for example, the concept of the ‘electronic lab notebook’ and the availability of extensive concept-extraction tools, primarily from the chemical informatics field.
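The tag-and-keyword characterization described above can be sketched as an inverted index over schema-controlled tags, which is what makes efficient searches over large data collections possible. This is an illustrative sketch only, not project code; the schema vocabulary and dataset names are hypothetical.

```python
# Illustrative sketch: datasets are characterized by schema-controlled tags,
# and an inverted index makes keyword search over them cheap.
from collections import defaultdict

# Hypothetical controlled vocabulary for combustion-kinetics data.
SCHEMA_TAGS = {"fuel", "oxidizer", "temperature_range", "pressure_range", "mechanism"}

class TagIndex:
    def __init__(self):
        self._index = defaultdict(set)   # (tag, value) -> dataset ids
        self._records = {}

    def add(self, dataset_id, tags):
        unknown = set(tags) - SCHEMA_TAGS
        if unknown:
            raise ValueError(f"tags not in schema: {unknown}")
        self._records[dataset_id] = tags
        for key, value in tags.items():
            self._index[(key, value)].add(dataset_id)

    def search(self, **criteria):
        # Intersect posting lists: datasets matching every criterion.
        sets = [self._index[(k, v)] for k, v in criteria.items()]
        return set.intersection(*sets) if sets else set()

index = TagIndex()
index.add("ds1", {"fuel": "n-heptane", "mechanism": "LLNL"})
index.add("ds2", {"fuel": "n-heptane", "mechanism": "CRECK"})
print(index.search(fuel="n-heptane", mechanism="LLNL"))
```

The schema check at ingest is what the abstract's "formalization" buys in practice: only vocabulary-conformant tags enter the index, so searches stay predictable.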
RDAP14: OSTP Panel: NIH’s Update on Public Access (ASIS&T)
Research Data Access & Preservation Summit
March 26-28, 2014
San Diego, CA
Panel: Funding agency responses to federal requirements for public access to research results
Dr. Neil M. Thakur, National Institutes of Health, Special Assistant to the Deputy Director for Extramural Research
This webinar is intended for librarians, staff, and information professionals interested in improving usability of the DMPTool at their institution. It will also help institutions begin to formalize which individuals or resources will be available to help researchers using the tool, and will be most useful for users who need to customize the tool for their institution.
Poster RDAP13: Research Data in eCommons @ Cornell: Present and Future (ASIS&T)
Wendy A. Kozlowski, Dianne Dietrich, Gail Steinhart and Sarah Wright
Cornell University Library, Ithaca, NY
Research Data in eCommons @ Cornell: Present and Future
Research Data Access & Preservation Summit 2013
Baltimore, MD April 4, 2013 #rdap13
February 18, 2015, NISO Virtual Conference: Scientific Data Management: Caring for Your Institution and its Intellectual Wealth
Keynote Address: Data Management Plan Requirements at the US Department of Energy
Laura J. Biven, Ph.D., Senior Science and Technology Advisor, Office of the Deputy Director for Science Programs, Office of Science, US Department of Energy
Dupuich RDAP11: Institutional Repository Case Studies (ASIS&T)
Jonas Dupuich, Berkeley Electronic Press; Institutional Repository Case Studies; RDAP11 Summit
The 2nd Research Data Access and Preservation (RDAP) Summit
An ASIS&T Summit
March 31-April 1, 2011 Denver, CO
In cooperation with the Coalition for Networked Information
http://asist.org/Conferences/RDAP11/index.html
Poster RDAP13: A Workflow for Depositing to a Research Data Repository: A Cas... (ASIS&T)
Betsy Gunia, David Fearon, Benjamin Brosius, Tim DiLauro
JHU Data Management Services
Johns Hopkins University Sheridan Libraries
A Workflow for Depositing to a Research Data Repository: A Case Study for Archiving Publication Data
Research Data Access & Preservation Summit 2013
Baltimore, MD April 4, 2013 #rdap13
Presentation by Lisa Federer (UCLA) on 16 July 2013 as part of the IMLS-sponsored DMPTool Webinar Series.
Description: This webinar will discuss the special needs of health sciences researchers and help you learn how to talk to researchers in the health and medical fields about their data management needs. We will cover NIH Data Sharing Policy and how to write a data management plan that meets NIH’s requirements. After viewing this webinar, participants will understand: who is required to submit a plan; specific information that should be included in a plan; how to use the DMPTool to write an NIH-specific DMP; and where to find additional resources for help.
A demonstration of the DMPTool, which helps researchers create data management plans now required by the National Science Foundation and other US grant funding agencies. See http://www.cdlib.org/uc3/webinars/20111019/ for a recording.
DMPTool Webinar Series 1: Introduction to DMPTool, by Carly Strasser
Slides from DMPTool Webinar Series 1: Introduction to DMPTool, given 28 May 2013. Recording available at http://www.cdlib.org/services/uc3/uc3webinars.html
NISO Virtual Conference
Scientific Data Management: Caring for Your Institution and its Intellectual Wealth
Enabling transparency and efficiency in the research landscape
Dr. Melissa Haendel, Associate Professor, Ontology Development Group, OHSU Library, Department of Medical Informatics and Epidemiology, Oregon Health & Science University
Integration of research literature and data (InFoLiS), by Philipp Zumstein
Talk at CNI 2015 Spring Membership Meeting in Seattle on April 14th, 2015, see http://www.cni.org/events/membership-meetings/upcoming-meeting/spring-2015/
Abstract: The goal of the InFoLiS project is to connect research data and publications. Links between data and literature are created automatically by means of text mining and made available as Linked Open Data (LOD) for seamless integration into different retrieval systems. This enables scientists to directly access information about corresponding research data in a literature information system, and, vice versa, it is possible to directly find different interpretations and analyses in the literature of the same research data. In our talk, we will describe our methods for generating the links and give insight into the Linked Data infrastructure including the services we are currently building. Most importantly, we will detail how our solutions can be used by other institutions and invite all interested participants to discuss with us their ideas and thoughts on the requirements for these services to ensure broad interoperability with existing systems and infrastructures. InFoLiS is a joint project by the GESIS – Leibniz Institute for the Social Sciences, Cologne, Mannheim University Library, and Mannheim University supported by a grant from the DFG – German Research Foundation.
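The automatic linking described above can be approximated in a few lines: scan publication text for dataset-style references and emit RDF-like (subject, predicate, object) links. This is a hedged sketch of the general idea, not InFoLiS's actual pipeline; the reference pattern, example URIs, and URI scheme are assumptions for illustration.

```python
# Minimal sketch of pattern-based publication-to-dataset linking: find
# survey-style references (an all-caps study name followed by a year) and
# emit triples suitable for a Linked Open Data store.
import re

# Hypothetical pattern: study names such as "ALLBUS 2010" (name + year).
DATASET_REF = re.compile(r"\b([A-Z]{3,}[A-Za-z]*)\s+(\d{4})\b")

def extract_links(pub_uri, text):
    links = []
    for name, year in DATASET_REF.findall(text):
        # Assumed URI scheme for the dataset registry.
        dataset_uri = f"http://example.org/dataset/{name}_{year}"
        links.append((pub_uri, "http://purl.org/dc/terms/references", dataset_uri))
    return links

links = extract_links(
    "http://example.org/pub/42",
    "Our analysis draws on the ALLBUS 2010 survey.",
)
print(links)
```

A production system would replace the single regex with trained text-mining models and disambiguation against a dataset registry, but the output shape, links between publication and dataset URIs, is the same.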
RDAP15 EarthCollab: Connecting Scientific Information Sources using the Sema... (ASIS&T)
Research Data Access and Preservation Summit, 2015
Minneapolis, MN
April 22-23, 2015
Erica M. Johns, Jon Corson-Rikert, Huda J. Khan, Dean B. Krafft and Matthew S. Mayernik
Talk given at the Data Visualisation and the Future of Academic Publishing event. https://www.eventbrite.com/e/data-visualisation-and-the-future-of-academic-publishing-tickets-25372801733?password=dataviz
10-1-13 “Research Data Curation at UC San Diego: An Overview” Presentation Sl... (DuraSpace)
“Hot Topics: The DuraSpace Community Webinar Series,” Series Six: “Research Data in Repositories.” Curated by David Minor, Research Data Curation Program, UC San Diego Library. Webinar 1: “Research Data Curation at UC San Diego: An Overview”
Presented by David Minor & Declan Fleming, Chief Technology Strategist, UC San Diego Library
Data grids are an emerging technology that enables the formation of sharable collections from data distributed across multiple storage resources. The integrated Rule Oriented Data System (iRODS) is a data grid developed by the DICE Center at UNC-CH. The iRODS data grid enforces management policies that control properties of the collection. Examples of policies include retention, disposition, distribution, replication, metadata extraction, time-dependent access controls, data processing, data redaction, and integrity checking. Policies can be defined that automate administrative functions (file migration and replication) and that validate assessment criteria (authenticity, integrity, chain of custody). iRODS is used to build data sharing environments, digital libraries, and preservation environments. The iRODS data grid is used at UNC-CH to support the Carolina Digital Repository, the LifeTime Library for the School of Information and Library Science, data grids for the Renaissance Computing Institute (RENCI), collaborations within North Carolina, and both national and international data sharing. At RENCI, the TUCASI data grid supports shared collections between UNC-CH, Duke, and NCSU. The RENCI data grid is federated with ten other data grids including the National Climatic Data Center, the Texas Advanced Computing Center data grid, and the Ocean Observatories Initiative data grid. International applications include the CyberSKA Square Kilometer Array for radio astronomy and the French National Institute for Nuclear Physics and Particle Physics. The collections that are assembled may contain hundreds of millions of files, and petabytes of data. A specific goal is the integration of institutional repositories with the national data infrastructure that is being assembled under the NSF DataNet program. The software is available as an open source distribution from http://irods.diceresearch.org.
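One of the policy types listed above, integrity checking, can be illustrated in plain Python: record a checksum at ingest, then periodically recompute and compare, as a data grid's rule engine might do on a schedule. This is a sketch of the concept only, not actual iRODS rule code; the collection structure and object names are hypothetical.

```python
# Sketch of an integrity-checking policy: verify stored checksums against
# recomputed ones to detect corruption in a managed collection.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A toy "collection": object name -> (bytes, checksum recorded at ingest).
collection = {}

def ingest(name, data):
    collection[name] = (data, sha256(data))

def audit():
    """Return names of objects whose content no longer matches its checksum."""
    return [name for name, (data, digest) in collection.items()
            if sha256(data) != digest]

ingest("obs-001.dat", b"temperature,pressure\n300,1.0\n")
assert audit() == []                                      # intact collection passes
data, digest = collection["obs-001.dat"]
collection["obs-001.dat"] = (data + b"X", digest)         # simulate bit rot
print(audit())
```

In a real grid the same check runs against replicas across storage resources, so a failed audit can trigger automatic repair from a healthy copy.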
Merritt’s micro-services-based architecture provides a number of options for easy integration with diverse external discovery services with specific disciplinary focus on scientific data sharing. By removing many of the barriers faced by researchers interested in data publication, the integrations of Merritt with DataShare and Research Hub exemplify a new service model for cooperative and distributed data sharing. The widespread adoption of such sharing is critical to open scientific inquiry and advancement.
In 2012, the University of Idaho Library began implementing VIVO, an open-source Semantic Web application, both as a discovery layer for its fledgling institutional repository and as a database to describe, visualize, and report university research activity. The presenters will detail some of the challenges they encountered developing this resource, while discussing the tools and techniques they used for obtaining, editing, and uploading institutional data into the RDF-based VIVO system.
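Uploading institutional data into an RDF-based system like VIVO ultimately means producing triples in a standard serialization. The sketch below builds (subject, predicate, object) triples and emits N-Triples by hand for illustration; a real deployment would typically use an RDF library, and the URIs shown are hypothetical.

```python
# Hedged sketch: serialize institutional data as N-Triples for ingest into
# an RDF store. URIs are stand-ins, not a real VIVO instance.
def to_ntriples(triples):
    lines = []
    for s, p, o in triples:
        # URIs get angle brackets; anything else is treated as a literal.
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

triples = [
    ("http://vivo.example.edu/person/n123",
     "http://www.w3.org/2000/01/rdf-schema#label",
     "Jane Scholar"),
    ("http://vivo.example.edu/person/n123",
     "http://vivo.example.edu/ontology/authorOf",
     "http://vivo.example.edu/pub/n456"),
]
print(to_ntriples(triples))
```

The same triples, once loaded, are what VIVO's discovery and reporting views query; the hard part in practice is the upstream cleaning of institutional data, as the presenters note.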
University of Minnesota’s Lisa Johnston talks about five ways your library can support researchers when sharing their data. From the October 22, 2015 webinar, How to assist researchers in sharing their research data: http://libraryconnect.elsevier.com/library-connect-webinars?commid=175949
The thorough integration of information technology and resources into scientific workflows has nurtured a new paradigm of data-intensive science. However, far too much research activity still takes place in silos, to the detriment of open scientific inquiry and advancement. Data-intensive science would be facilitated by more universal adoption of good data management practices ensuring the ongoing viability and usability of all legitimate research outputs, including data, and the encouragement of data publication and sharing for reuse. The centerpiece of such data sharing is the digital repository, acting as the foundation for external value-added services supporting and promoting effective data acquisition, publication, discovery, and dissemination. Since a general-purpose curation repository will not be able to offer the same level of specialized user experience provided by disciplinary tools and portals, a layered model built on a stable repository core is an appropriate division of labor, taking best advantage of the relative strengths of the concerned systems.
The Merritt repository, operated by the University of California Curation Center (UC3) at the California Digital Library (CDL), functions as a curation core for several data sharing initiatives, including the eScholarship open access publishing platform, the DataONE network, and the Open Context archaeological portal. This presentation will highlight two recent examples of external integration for purposes of research data sharing: DataShare, an open portal for biomedical data at UC San Francisco; and Research Hub, an Alfresco-based content management system at UC Berkeley. Both significantly extend Merritt’s coverage of the full research data lifecycle and workflows: upstream, with augmented capabilities for data description, packaging, and deposit; and downstream, with enhanced domain-specific discovery. These efforts showcase the catalyzing effect that coupled integration of curation repositories and well-known public disciplinary search environments can have on research data sharing and scientific advancement.
This presentation was provided by Lisa Johnston, University of Minnesota, for a NISO Virtual Conference on data curation held on Wednesday, August 31, 2016
GraphRAG is All You Need? LLM & Knowledge Graph, by Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
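The deployment bill of materials (DBOM) the talk describes is, at its simplest, a verifiable record of what was deployed, where, and from which artifacts. The sketch below captures one as JSON with per-artifact content hashes; the field names are illustrative assumptions, not a formal DBOM schema.

```python
# Sketch of capturing a deployment bill of materials: service, target
# environment, and a hash-pinned list of deployed artifacts.
import hashlib
import json

def make_dbom(service, environment, artifacts):
    entries = [
        {"name": a["name"], "version": a["version"],
         "sha256": hashlib.sha256(a["content"]).hexdigest()}
        for a in artifacts
    ]
    return json.dumps(
        {"service": service, "environment": environment, "artifacts": entries},
        indent=2, sort_keys=True)

dbom = make_dbom(
    "payments-api", "prod",
    [{"name": "payments-api", "version": "1.4.2", "content": b"<image layers>"}],
)
print(dbom)
```

Because each artifact is pinned by hash, a later audit can prove whether what is running in production matches what the pipeline actually built, which is the governance gap the talk highlights.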
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Accelerate your Kubernetes clusters with Varnish Caching, by Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Software Delivery at the Speed of AI: Inflectra Invests in AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI support, by Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
3. More Than a “Library” Issue
Worked on IR issues since 2003
Opening Doors to Open Scholarship (white paper)
http://uknowledge.uky.edu/libraries_reports/1/
2007
Advocated for campus-wide support
6. Digital Commons and bepress
Hosted solution
Indexed by Google and Google Scholar
OAI compliant
Persistent URL
Creation of peer reviewed journals
Content can be delivered back to us
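The "OAI compliant" bullet above means the repository's metadata can be harvested over OAI-PMH. Below is a minimal sketch of building a ListRecords request and pulling Dublin Core titles from a response, using only the Python standard library; the base URL shown is a hypothetical example, not a confirmed endpoint, and a live harvester would fetch over HTTP rather than parse an inline sample.

```python
# Sketch: harvesting repository metadata over OAI-PMH, the protocol behind
# the "OAI compliant" bullet. The endpoint URL is a hypothetical example.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urlencode(params)

def parse_titles(oai_xml):
    """Pull Dublin Core titles out of a ListRecords response."""
    root = ET.fromstring(oai_xml)
    return [t.text for t in root.iter(DC + "title")]

# A tiny sample response, standing in for a live HTTP fetch:
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                 xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Opening Doors to Open Scholarship</dc:title>
      </oai_dc:dc>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

print(list_records_url("https://uknowledge.uky.edu/do/oai/"))
print(parse_titles(sample))  # -> ['Opening Doors to Open Scholarship']
```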
7.
8. What are we putting into UKnowledge?
Electronic Theses and Dissertations
Articles
Books
Presentations
University press materials
10. Kentucky Digital Library Migration
DLXS to what?
Has to accommodate multiple kinds of data
Not simply a CMS
Use all open source tools
Blacklight as the discovery layer
Settled on a Micro-Services based repository
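Blacklight as a discovery layer sits on top of an Apache Solr index, so "one search across all content" comes down to faceted Solr queries. The sketch below shows the kind of request such a discovery layer issues under the hood; the core name and facet field names are illustrative assumptions, not the actual index schema.

```python
# Sketch: the sort of faceted Solr query a Blacklight-style discovery
# layer issues. Core name and field names (format, collection) are
# hypothetical, for illustration only.
from urllib.parse import urlencode

def facet_query(base_url, q, facet_fields, filters=None):
    """Build a Solr select URL with facet counts and optional filters."""
    params = [("q", q), ("wt", "json"), ("facet", "true")]
    params += [("facet.field", f) for f in facet_fields]
    for field, value in (filters or {}).items():
        params.append(("fq", f'{field}:"{value}"'))
    return base_url + "/select?" + urlencode(params)

url = facet_query(
    "http://localhost:8983/solr/uknowledge",
    q="oral history",
    facet_fields=["format", "collection"],
    filters={"format": "Audio"},
)
print(url)
```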
14. At the same time a storm is brewing…
New data management requirements from federal granting agencies (NSF, NIH, etc.)
Various solutions popping up around campus
16. Partners: Specialized Expertise
Library
Digital preservation
Metadata
Information Technology
Hardware Infrastructure
Cloud Storage solutions
Research
Proposal development
Agency requirements
17. Educating partners
Working from a position of strength
Leverage our expertise and experience
Confidence in the approach
18. Benchmark Survey/Comparison
Georgia Institute of Technology
Ohio State University
Pennsylvania State University
Rutgers University - New Brunswick
Texas A & M University
The University of Texas at Austin
University of California - Berkeley
University of California - Davis
University of California - Los Angeles
University of California - San Diego
University of Florida
University of Illinois at Urbana-Champaign
University of Illinois
University of Maryland - College Park
University of Michigan - Ann Arbor
University of Minnesota - Twin Cities
University of North Carolina at Chapel Hill
University of Pittsburgh - Pittsburgh Campus
University of Virginia
University of Washington - Seattle Campus
University of Wisconsin - Madison
North Texas University
19. Options
Contract with a vendor
Leverage DSpace
Bring up a Fedora repository
Look for a new approach
20. Enter Micro-Services! Again.
Nimble
Flexible
Modular
Easily replaced
Support our overall strategy
http://www.cdlib.org/services/uc3/curation/
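The CDL micro-services model above can be illustrated with a single, self-contained service. The sketch below shows a fixity (checksum) micro-service: it records digests for stored objects and later re-verifies them, a narrowly scoped job that could be swapped out independently. The function names and manifest format are illustrative assumptions, not CDL's actual API.

```python
# Sketch: a fixity micro-service in the spirit of the CDL curation
# micro-services -- one small, replaceable component that records and
# re-verifies checksums. Names and formats here are illustrative.
import hashlib
import json

def checksum(data: bytes, algorithm: str = "sha256") -> str:
    """Compute a hex digest for one stored object."""
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest()

def record_fixity(objects: dict) -> str:
    """Produce a fixity manifest (object id -> digest) as JSON."""
    manifest = {obj_id: checksum(data) for obj_id, data in objects.items()}
    return json.dumps(manifest, indent=2, sort_keys=True)

def verify_fixity(objects: dict, manifest_json: str) -> list:
    """Return ids of objects whose current digest no longer matches."""
    manifest = json.loads(manifest_json)
    return [obj_id for obj_id, data in objects.items()
            if manifest.get(obj_id) != checksum(data)]

store = {"etd-001": b"thesis pdf bytes", "img-042": b"map scan bytes"}
manifest = record_fixity(store)
store["img-042"] = b"corrupted bytes"      # simulate bit rot
print(verify_fixity(store, manifest))      # -> ['img-042']
```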
22. [Slide diagram showing content streams feeding the repository: NSF & NIH grant-associated research data, e-prints, electronic records, UKnowledge, digital media, digital library content, other research data]
Hydra to address front-end repository functions
Common Metadata Store
Micro-Services to address back-end repository functions
Hybrid Cloud Storage – File & Object Based
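A hybrid file-and-object storage layer implies some routing rule that decides where each deposit lands. Below is a minimal sketch of such a rule; the size threshold, access patterns, and tier names are assumptions for illustration, not the University of Kentucky's actual storage policy.

```python
# Sketch: routing deposits between file and object storage in a hybrid
# back end. The cutoff and tier names are illustrative assumptions.
LARGE_OBJECT_BYTES = 500 * 1024 * 1024  # assumed cutoff, not a real policy

def storage_tier(size_bytes: int, access_pattern: str) -> str:
    """Pick a storage tier for a deposited object."""
    if access_pattern == "frequent":
        return "file"            # POSIX file store for active content
    if size_bytes >= LARGE_OBJECT_BYTES:
        return "object-cold"     # cheap object storage for big, cold data
    return "object-standard"

print(storage_tier(10 * 1024, "frequent"))    # -> file
print(storage_tier(2 * 1024**3, "archival"))  # -> object-cold
print(storage_tier(1024, "archival"))         # -> object-standard
```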
33. "Sustainable preservation strategies are not built all at once, nor are they static. Sustainable preservation is a series of timely actions taken to anticipate the dynamic nature of digital information."
- Sustainable Economics for a Digital Planet: Ensuring Long-Term Access to Digital Information. Blue Ribbon Task Force on Sustainable Preservation and Access. February 2010.
http://brtf.sdsc.edu/
Editor's Notes
Repositories and the issues surrounding them are not unlike this image of an island. What looks straightforward on the surface can take you down once you start looking below the surface. It is my goal with this presentation to share our Kentucky story and how we are moving incrementally to what we feel is a sustainable solid solution for our campus.
Not a new issue for us. We started working on this a long time ago! I personally had this as my project with the Frye Institute in 2003! UK Libraries formally charged a group to work on this in 2007. Needed campus participation and buy-in!
This is the model for a repository that we included in that report. Many kinds of content sitting in one repository structure with various ways that content would be served out to users. Essentially we had it right. We just didn't really know how to execute it. It was really clear this was NOT a library project – it was really big and required skills that we did not have internally.
We started looking at options and settled on Digital Commons as an interface and user facing product that would meet many needs.
Allowed us to have a real conversation with faculty
Forced to migrate. Let's do it right! Content: books, images, maps, finding aids… all straightforward. Add in newspaper content, oral history interviews – audio, high-definition video. Eric Weig, our Director of Digital Library Services, took a sabbatical project to look at options.
Micro-services are an approach to digital curation built from independent, but interoperable, services. Since each of the services is small and self-contained, they are collectively easier to develop, deploy, maintain, and enhance. Equally important, they are more easily replaced when they have outlived their usefulness. Although the individual services are narrowly scoped, the complex function needed for effective curation emerges from the strategic combination of individual services.
One search across all content. Faceted discovery.
Allow a different interface for UK content – Explore UK
Allowing us to start the conversations and develop partnerships on campus
Try to control this constantly changing landscape!
Each partner brings strengths
Agreed to start to actively pursue an enterprise wide repository
New CIO. Federal grant requirements. Budget squeezing everyone. More awareness.
Trusted Digital Repository – holds all types of content in a secure system supported by policies and procedures to ensure sustainability. Needs to be scalable and extensible. Mix of spinning disk, tape, and cloud storage – various requirements for data security.
The Hydra Project - Hydra's ultimate objective is to produce a community-sourced, sustainable application framework that provides rich and robust repository-powered solutions as an integrated part of an overall digital content management architecture.
Trusted Digital Repository – holds all types of content in secure system supported by policies and procedures to ensure sustainability. Needs to be scalable and extensible.