This document summarizes a presentation given at the NISO/BISG 7th Annual Forum on the Changing Standards Landscape. The presentation discusses the current bibliographic data ecosystem based on MARC standards and the movement toward linked data. It introduces NISO's Bibliographic Roadmap Initiative, which aims to identify gaps and engage stakeholders in developing a roadmap for the next generation of bibliographic data exchange to ensure interoperability, cost effectiveness, and adoption.
About the Webinar
The digitization of resources can provide expanded access to information as well as a preservation mechanism for now-fragile materials. Preserving the digital copy of the resource is an issue now being addressed, but what about the software used to create digital files? How can software on media which can no longer be read -- or no longer be read easily -- be preserved? If that software can’t be accessed, what happens to the material created by, and only read by, that software?
Progress has been made in formulating standards for the preservation and description of digital materials, and a framework for addressing digital item preservation has been proposed. However, despite meetings such as the Library of Congress’ “Preserving.exe: Toward a National Strategy for Preserving Software,” no formal standard or framework yet exists for software digitization and preservation. This webinar will feature three presenters who will speak on aspects of software digitization and preservation, including a how-to approach (technical aspects), a metadata component, and observations from the field, as part of the continuing discussion on the state of the field and the need for standardization.
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
Software artifacts: Migration and Emulation
Michael Lesk, Professor of Library and Information Science, Rutgers University
Emulation in practice: Emulation as a Service at Yale University Library: Lessons learnt and plans for the future
Euan Cochrane, Digital Preservation Manager, Yale University Library
No (You Can't Expect To Run Your Files Just Because You Saved Them)
Jon Ippolito, Professor of New Media and Director of the Digital Curation graduate program, University of Maine
This presentation was provided by Gerald Benoit of Simmons College during the NISO webinar, Enabling Discovery and Retrieval of Non-Traditional and Granular Content, held on June 7, 2017.
DBpedia Spotlight is a tool employed in the Extraction stage of the LOD Life Cycle, performing entity recognition and linking. Although the tool currently specializes in English, support for other languages is being tested, and demos for German, Dutch, and others are available or underway. The tool can be used to enable faceted browsing and semantic search, among other applications. In this webinar we will describe what DBpedia Spotlight is, how it works, and how you can benefit from it in your application.
If you are interested in Linked (Open) Data principles and mechanisms, LOD tools and services, and concrete use cases that can be realised using LOD, then join us in the free LOD2 webinar series!
http://lod2.eu/BlogPost/webinar-series
DBpedia Spotlight: a configurable annotation tool to support a variety of use cases. Given input text in English, we extract DBpedia Resources and generate annotations according to user-provided configuration parameters. These parameters can include score thresholds, entity types, and even arbitrary "type" definitions through SPARQL queries.
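A minimal sketch of how such score-threshold and entity-type configuration might look on the client side, assuming annotations shaped like Spotlight's JSON `/annotate` output; the sample resources and the threshold values are illustrative, not taken from the live service:

```python
# Hypothetical sketch: filtering DBpedia Spotlight-style annotations client-side
# by score threshold and entity type. The dict keys mirror Spotlight's JSON
# output, but the sample data and thresholds below are invented for illustration.

def filter_annotations(resources, min_score=0.5, required_type=None):
    """Keep resources whose similarity score meets the threshold and,
    optionally, whose type list contains the required DBpedia type."""
    kept = []
    for r in resources:
        if float(r["@similarityScore"]) < min_score:
            continue
        if required_type and required_type not in r["@types"].split(","):
            continue
        kept.append(r["@URI"])
    return kept

# Example annotation fragment (illustrative values):
sample = [
    {"@URI": "http://dbpedia.org/resource/Berlin",
     "@similarityScore": "0.98", "@types": "DBpedia:City,DBpedia:Place"},
    {"@URI": "http://dbpedia.org/resource/Python",
     "@similarityScore": "0.40", "@types": "DBpedia:Language"},
]

print(filter_annotations(sample, min_score=0.5, required_type="DBpedia:Place"))
# -> ['http://dbpedia.org/resource/Berlin']
```

In the real service these filters can be pushed server-side via request parameters, and arbitrary type sets can be defined through SPARQL queries as the description notes.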
This is the presentation at the best paper award session at I-SEMANTICS 2011.
Linked Open Data Alignment and Enrichment Using Bootstrapping Based Techniques
Prateek Jain
The recent emergence of the “Linked Data” approach for publishing data represents a major step forward in realizing the original vision of a web that can “understand and satisfy the requests of people and machines to use the web content” – i.e. the Semantic Web. This new approach has resulted in the Linked Open Data (LOD) Cloud, which includes more than 70 large datasets contributed by experts belonging to diverse communities such as geography, entertainment, and life sciences. However, the current interlinks between datasets in the LOD Cloud – as we will illustrate – are too shallow to realize much of the benefits promised. If this limitation is left unaddressed, then the LOD Cloud will merely be more data that suffers from the same kinds of problems, which plague the Web of Documents, and hence the vision of the Semantic Web will fall short.
This thesis presents a comprehensive solution to address these issues using a bootstrapping based approach. It showcases using bootstrapping based methods to identify and create richer relationships between LOD datasets. The BLOOMS project (http://wiki.knoesis.org/index.php/BLOOMS) and the PLATO project, both built as part of this research, have provided evidence to the feasibility and the applicability of the solution.
The Second Life Library 2.0 project has great potential and momentum. It includes HealthInfo Island, which focuses on consumer health information, as well as a Medical Library and a Health and Wellness Center.
This presentation shows what libraries are doing in Second Life.
User research for the development of search systems
Max Kemman
Presentation at Erasmus University Library 11-12-2012.
For the most part a combination of slides from previous presentations, mostly from http://www.slideshare.net/MaxKemman/mapping-the-use-of-digital-sources-amongst-humanities-scholars-in-the-netherlands
Digital Preservation Best Practices: Lessons Learned From Across the Pond
Benoit Pauwels
Digital Preservation Best Practices: Lessons Learned From Across the Pond. Slavko Manojlovich (Associate University Librarian (IT) / Manager, Digital Archives Initiative, Memorial University, St. John's, Canada) and Benoit Pauwels (Head, Library Automation Team, Université libre de Bruxelles, Belgium).
Prateek Jain dissertation defense, Kno.e.sis, Wright State University
Prateek Jain
The recent emergence of the “Linked Data” approach for publishing data represents a major step forward in realizing the original vision of a web that can "understand and satisfy the requests of people and machines to use the web content" – i.e. the Semantic Web. This new approach has resulted in the Linked Open Data (LOD) Cloud, which includes more than 70 large datasets contributed by experts belonging to diverse communities such as geography, entertainment, and life sciences. However, the current interlinks between datasets in the LOD Cloud – as we will illustrate – are too shallow to realize much of the benefits promised. If this limitation is left unaddressed, then the LOD Cloud will merely be more data that suffers from the same kinds of problems, which plague the Web of Documents, and hence the vision of the Semantic Web will fall short.
This thesis presents a comprehensive solution to the problem of alignment and relationship identification using a bootstrapping based approach. By alignment we mean the process of determining correspondences between the classes and properties of ontologies. We identify subsumption, equivalence, and part-of relationships between classes; part-of relationships between instances; and subsumption and equivalence relationships between properties. By bootstrapping we mean the process of utilizing the information contained within the datasets to improve the data within them. The work showcases the use of bootstrapping based methods to identify and create richer relationships between LOD datasets. The BLOOMS project (http://wiki.knoesis.org/index.php/BLOOMS) and the PLATO project, both built as part of this research, have provided evidence of the feasibility and applicability of the solution.
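As a toy illustration of the alignment relationships described above (a drastic simplification: BLOOMS actually builds category hierarchies from Wikipedia rather than comparing flat label sets), two ontology classes can be classified by comparing sets of contextual labels:

```python
# Toy sketch of alignment by label-set comparison. The label sets and the
# set-containment heuristic are invented for illustration; BLOOMS derives
# its evidence from Wikipedia category trees.

def align(labels_a, labels_b):
    """Classify the relationship between two classes from their label sets."""
    a, b = set(labels_a), set(labels_b)
    if a == b:
        return "equivalent"
    if a < b:            # every label of A also describes B
        return "A subsumed-by B"
    if b < a:
        return "B subsumed-by A"
    if a & b:
        return "overlapping"
    return "unrelated"

print(align({"settlement", "city"}, {"settlement", "city"}))   # equivalent
print(align({"city"}, {"city", "settlement", "place"}))        # A subsumed-by B
```

The point of bootstrapping is that these label sets come from the linked datasets themselves, so richer interlinks are derived without external supervision.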
What does success look like when it comes to library discoverability? Index-based discovery systems have seen a dramatic rate of adoption since their introduction to the research ecosystem in 2009, with more than 9,000 libraries relying on a discovery system to provide users with a comprehensive index to their offerings. Some issues bar the way to providing this comprehensive view, but many challenges have been overcome through collaboration between libraries, content providers, and discovery partners. The NISO ODI initiative began to examine these issues in 2011 and released a best practice in June 2014.
Speakers will highlight examples of successful collaboration, note continued areas of challenge, and provide insight on how the Open Discovery Initiative Conformance Checklists can be used as a mechanism to evaluate content provider or discovery provider conformance with the best practice.
Creating better user interfaces for libraries catalogues: how to present and ...
Tanja Merčun
ELAG 2013 slides and report for the workshop "Creating better user interfaces for libraries catalogues: how to present and interact with (FRBR-based) bibliographic data?" by Tanja Merčun and Maja Žumer.
About the Webinar
In the new models for describing information resources (FRBR, RDA, BIBFRAME), the conceptual essence of an item, referred to as a "Work", is separated from the specific manifestations of the item, referred to as "Instances" or "Expressions". The work "Macbeth" by Shakespeare could exist in multiple forms or versions and in a variety of media, from a print copy of the play to a DVD of a live performance. Of equal importance in the new models is describing the relationship between a Work and its various Instances/Expressions. This represents an entirely different way of thinking about resource description for libraries and users.
While the new models are still in the early days of implementation, a number of efforts are already underway to describe resources using these new concepts and relationships. This webinar will explore how metadata descriptive systems are developing around the new notion of “Works”.
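The Work/Instance separation described above can be sketched in a few lines of code; the class and attribute names below are illustrative stand-ins, not the actual BIBFRAME or RDA vocabulary:

```python
# Minimal sketch of the Work/Instance model: one conceptual Work linked to
# many physical or digital Instances. Names are illustrative, not BIBFRAME's.
from dataclasses import dataclass, field

@dataclass
class Instance:
    carrier: str      # e.g. "print", "DVD"
    year: int

@dataclass
class Work:
    title: str
    creator: str
    instances: list = field(default_factory=list)

    def add_instance(self, inst):
        # This Work-to-Instance link is the relationship the new models
        # treat as a first-class part of the description.
        self.instances.append(inst)

macbeth = Work("Macbeth", "William Shakespeare")
macbeth.add_instance(Instance("print", 1992))
macbeth.add_instance(Instance("DVD", 2010))
print([i.carrier for i in macbeth.instances])  # one Work, many carriers
```

The design choice mirrors the webinar's point: descriptive data about the play itself lives once on the Work, while carrier-specific details live on each Instance.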
About the Webinar
The development and rising popularity of the massive open online course (MOOC) presents a new opportunity for libraries to be involved in the education of patrons, to highlight the resources libraries provide and to further demonstrate the value of the library to administrators. There are, of course, a host of logistics to be considered when deciding to organize or support a MOOC. Diminished library budgets and staffing levels challenge libraries both monetarily and administratively. Marketing the course, mounting it on a site, securing copyright permissions and negotiating licensing for course materials, managing the course while in progress and troubleshooting technical problems add to the issues that have caused some libraries to hesitate in joining the MOOC movement. On the other hand, partnerships such as that between Georgetown University and edX, itself an initiative of Harvard and MIT, allow a pooling of resources thereby easing the burden on any one library. In some cases price breaks for certain course materials used in MOOCs can help draw students to the course, though the pricing must still be negotiated by the course organizer. A successful MOOC, such as the RootsMOOC, created by the Z. Smith Reynolds Library at Wake Forest University and the State Library of North Carolina, can bring awareness of library resources to a broad audience.
In the end, libraries must ask whether the advantages of participating in a MOOC outweigh the challenges. The speakers for this webinar will consider these issues surrounding MOOCs and libraries and try to answer the question of whether the impact of libraries on MOOCs has been realized or is still brewing.
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
MOOCS: Assessing the Landscape and Trends of Open Online Learning
Heather Ruland Staines, Director Publisher and Content Strategy, ProQuest SIPX
The RootsMOOC Project or: that time we threw a genealogy party and 4,000 people showed up
Kyle Denlinger, eLearning Librarian, Wake Forest University Z. Smith Reynolds Library
Rebecca Hyman, Reference and Outreach Librarian, Government and Heritage Library, State Library of North Carolina
MOOCS and Me: Georgetown's Experience with MOOC Production
Barrinton Baynes, Multimedia Projects Manager, Gelardin New Media Center, Georgetown University Library
NISO Two Day Virtual Conference:
Using the Web as an E-Content Distribution Platform:
Challenges and Opportunities
Oct 21-22, 2014
Frances Pinter, Founder and Executive Director, Knowledge Unlatched
INNOVATION AND RESEARCH (Digital Library Information Access)
Libcorpio
Innovation and research, Digital Library Information Access, LIS Education, Library and Information Science, LIS Studies, Information Management, Education and Learning, Library science, Information science, Digital Libraries, Research on Digital Libraries, DL, Innovation in libraries and publishing, Areas of Research for DL, Information Discovery, Collection Management and Preservation, Interoperability, Economic, Social and Legal Issues, Core Topics In Digital Libraries, DL Research Around The World
DataCite – Bridging the gap and helping to find, access and reuse data – Herb...
OpenAIRE
OpenAIRE Interoperability Workshop (8 Feb. 2013).
DataCite – Bridging the gap and helping to find, access and reuse data – Herbert Gruttemeier, INIST-CNRS
A North Carolina Connecting to Collections (C2C) workshop co-taught by Audra Eagle Yun (WFU), Nicholas Graham (UNC), and Lisa Gregory (State Archives of NC). This workshop took place on June 13, 2011 in Wilson, NC.
ExLibris National Library Meeting @ IFLA-Helsinki - Aug 15th 2012
Lee Dirks
An invited talk to 40+ directors of national libraries worldwide at the annual ExLibris member meeting at IFLA (Helsinki, Finland) on August 15th, 2012.
The FP7 CODE project will be presented at the Big Data Benchmarking Community call. Here, a high-level overview will introduce CODE's vision and show the progress after six months.
This deck gives a basic overview of NoSQL technologies, implementation vendors and products, case studies, and some of the core implementation algorithms. The presentation also gives a quick overview of emerging trends such as "polyglot persistence" and "NewSQL".
The deck is targeted at beginners who want an overview of NoSQL databases.
"Infrastructure, relationships, trust, and RDA" presentation given by Mark Parsons, RDA Secretary General at the eInfrastructures & RDA for Data Intensive Science Workshop - held prior to the RDA 6th Plenary, Paris, 22 September 2015.
Similar to NISO BISG Forum: Bibliographic Roadmap (20)
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, during the closing segment of the NISO training series "AI & Prompt Design." Session Eight: Limitations and Potential Solutions, was held on May 23, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, during the seventh segment of the NISO training series "AI & Prompt Design." Session 7: Open Source Language Models, was held on May 16, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, during the sixth segment of the NISO training series "AI & Prompt Design." Session Six: Text Classification with LLMs, was held on May 9, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, during the fifth segment of the NISO training series "AI & Prompt Design." Session Five: Named Entity Recognition with LLMs, was held on May 2, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, during the fourth segment of the NISO training series "AI & Prompt Design." Session Four: Structured Data and Assistants, was held on April 25, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, during the third segment of the NISO training series "AI & Prompt Design." Session Three: Beginning Conversations, was held on April 18, 2024.
This presentation was provided by Kaveh Bazargan of River Valley Technologies, during the NISO webinar "Sustainability in Publishing." The event was held April 17, 2024.
This presentation was provided by Dana Compton of the American Society of Civil Engineers (ASCE), during the NISO webinar "Sustainability in Publishing." The event was held April 17, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, during the second segment of the NISO training series "AI & Prompt Design." Session Two: Large Language Models, was held on April 11, 2024.
This presentation was provided by Teresa Hazen of the University of Arizona, Geoff Morse of Northwestern University, and Ken Varnum of the University of Michigan, during the Spring ODI Conformance Statement Workshop for Libraries. This event was held on April 9, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, during the opening segment of the NISO training series "AI & Prompt Design." Session One: Introduction to Machine Learning, was held on April 4, 2024.
This presentation was provided by William Mattingly of the Smithsonian Institution, for the eighth and final session of NISO's 2023 Training Series on Text and Data Mining. Session eight, "Building Data Driven Applications," was held on Thursday, December 7, 2023.
This presentation was provided by William Mattingly of the Smithsonian Institution, for the seventh session of NISO's 2023 Training Series on Text and Data Mining. Session seven, "Vector Databases and Semantic Searching" was held on Thursday, November 30, 2023.
This presentation was provided by William Mattingly of the Smithsonian Institution, for the sixth session of NISO's 2023 Training Series on Text and Data Mining. Session six, "Text Mining Techniques" was held on Thursday, November 16, 2023.
This presentation was provided by William Mattingly of the Smithsonian Institution, for the fifth session of NISO's 2023 Training Series on Text and Data Mining. Session five, "Text Processing for Library Data" was held on Thursday, November 9, 2023.
This presentation was provided by Todd Carpenter, Executive Director, during the NISO webinar on "Strategic Planning." The event was held virtually on November 8, 2023.
This presentation was provided by Rhonda Ross of CAS, a division of the American Chemical Society, and Jonathan Clark of the International DOI Foundation, during the NISO webinar on "Strategic Planning." The event was held virtually on November 8, 2023.
This presentation was provided by William Mattingly of the Smithsonian Institution, for the fourth session of NISO's 2023 Training Series on Text and Data Mining. Session four, "Data Mining Techniques" was held on Thursday, November 2, 2023.
More from National Information Standards Organization (NISO) (20)
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Removing Uninteresting Bytes in Software Fuzzing
Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
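The general idea of trimming uninteresting seed bytes can be sketched as follows. This is an assumption-laden simplification, not DIAR's actual algorithm: the `coverage` function here is a stand-in for running the instrumented target and collecting its coverage signature.

```python
# Sketch of seed trimming: remove a byte, re-measure coverage, and keep the
# removal only if coverage is unchanged. DIAR's real analysis is more
# sophisticated; `coverage` stands in for executing the instrumented target.

def trim_seed(seed: bytes, coverage) -> bytes:
    baseline = coverage(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + 1:]
        if coverage(candidate) == baseline:
            seed = candidate          # byte was uninteresting; drop it
        else:
            i += 1                    # byte matters; keep it
    return seed

# Illustrative stand-in target: only bytes appearing in b"<tag>" affect
# the "coverage" signature, so everything else gets trimmed away.
def fake_coverage(data: bytes):
    return frozenset(b for b in data if b in b"<tag>")

print(trim_seed(b"xx<tag>yy", fake_coverage))  # -> b'<tag>'
```

A lean seed like this means the fuzzer's mutation budget is spent on bytes that can actually change program behavior.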
These are the slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
Communications Mining Series - Zero to Hero - Session 1
DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GridMate - End to end testing is a critical piece to ensure quality and avoid...
ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Climate Impact of Software Testing at Nordic Testing Days
Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Securing your Kubernetes cluster: a step-by-step guide to success!
KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
PHP Frameworks: I want to break free (IPC Berlin 2024)
NISO BISG Forum: Bibliographic Roadmap
1. NISO/BISG 7th Annual Forum on
The Changing Standards Landscape
The E-Book Supply Chain: Latest Developments
from Libraries and Publishers
June 28, 2013 • Chicago, IL
8. About
• Non-profit industry trade association accredited by ANSI, with 150+ members
• Mission of developing and maintaining technical standards related to information, documentation, discovery, and distribution of published materials and media
• Volunteer-driven organization: 400+ volunteers spread across the world
• Represents US interests to ISO TC 46 and also serves as Secretariat for ISO TC 46/SC 9 - Identification & Description
• Responsible for standards like ISSN, DOI, Dublin Core metadata, DAISY digital talking books, OpenURL, MARC records, and ISBN (indirectly)
9. NISO Internationally
Actively participates internationally with ISO, EDItEUR, IFLA, ICSTI, International STM Association, CODATA, UK Serials Group, LIBER, IETF, W3C
ISO Registration Authorities
12. Whither Bibliographic Data?
Designing a roadmap to a new bibliographic information ecosystem
Todd A. Carpenter, Executive Director, NISO
7th Annual BISG/NISO Changing Standards Landscape
June 28, 2013
13. Our Dear Old Friend, MARC
01386cam 2200301 a
450000100080000000500170000800800410002503500210006690600450008795500270013201000170015902000150
017604000180019104300120020905000220022108200210024311000550026424503080031926000670062730000250
069444000540071950000290077365000600080265000580086265000730092071000430099399100480103638568531
9951219150001.4881118s1989 nju 000 0 eng 9(DLC) 88029610 a7bcbccorignewd1eocipf19gy-gencatlg
aCIP ver. pv04 12-06-95 a 88029610 a0887389538 aDLCcDLCdDLC an-us---00aZ674.8b.N44
198900a021.6/5/09732192 aNational Information Standards Organization (U.S.)10aInformation retrieval service and
protocol :bAmerican national standard for information retrieval service definition and protocol specification for library
applications /capproved January 15, 1988 by American National Standards Institute ; developed by the National
Information Standards Organization. aNew Brunswick, N.J., U.S.A. :bTransaction Publishers,cc1989. axii, 50 p. ;c26
cm. 0aNational information standards series,x1041-5653 a"ANSI/NISO Z39.50-1988." 0aLibrary information
networksxStandardszUnited States. 0aComputer network protocolsxStandardszUnited States. 0aInformation storage
and retrieval systemsxStandardszUnited States.2 aAmerican National Standards Institute. bc-GenCollhZ674.8i.N44
1989tCopy 1wBOOKS
14. Our Dear Old Friend, MARC
(formatted for your viewing pleasure)
15. MARC Components
Encoding structure:
Z39.2
ISO 2709:2008 -- Format for information exchange
Format structure:
Anglo-American Cataloguing Rules, 2nd Edition (AACR2)
Resource Description & Access (RDA)
Exchange system:
Z39.50
SRU/SRW
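To make the encoding structure concrete: every ISO 2709 / MARC record opens with a fixed 24-byte leader, and the "01386cam ... 2200301" run at the top of the raw dump above is exactly that leader. A minimal sketch of decoding it; the leader string below is reconstructed from the dump with conventional blank positions, which are an assumption since the dump collapses whitespace:

```python
# Decoding the 24-byte leader of an ISO 2709 / MARC record.
# Leader reconstructed from the slide's raw dump; blank positions assumed.
leader = "01386cam a2200301 a 4500"
assert len(leader) == 24

record_length = int(leader[0:5])    # total record length in bytes
record_status = leader[5]           # 'c' = corrected or revised
type_of_record = leader[6]          # 'a' = language material
bib_level = leader[7]               # 'm' = monograph
base_address = int(leader[12:17])   # byte offset where the data fields begin

print(record_length, base_address)
```

Everything else in the record (the directory of tags, lengths, and offsets, then the variable fields) hangs off those two integers, which is why the raw dump is one unbroken run of digits and text.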
25. If you were building a network today
would you string copper everywhere?
26. If you were building a metadata ecosystem,
would you start here?
01386cam 2200301 a
450000100080000000500170000800800410002503500210006690600450008795500270013201000170015902000150
017604000180019104300120020905000220022108200210024311000550026424503080031926000670062730000250
069444000540071950000290077365000600080265000580086265000730092071000430099399100480103638568531
9951219150001.4881118s1989 nju 000 0 eng 9(DLC) 88029610 a7bcbccorignewd1eocipf19gy-gencatlg
aCIP ver. pv04 12-06-95 a 88029610 a0887389538 aDLCcDLCdDLC an-us---00aZ674.8b.N44
198900a021.6/5/09732192 aNational Information Standards Organization (U.S.)10aInformation retrieval service and
protocol :bAmerican national standard for information retrieval service definition and protocol specification for library
applications /capproved January 15, 1988 by American National Standards Institute ; developed by the National
Information Standards Organization. aNew Brunswick, N.J., U.S.A. :bTransaction Publishers,cc1989. axii, 50 p. ;c26
cm. 0aNational information standards series,x1041-5653 a"ANSI/NISO Z39.50-1988." 0aLibrary information
networksxStandardszUnited States. 0aComputer network protocolsxStandardszUnited States. 0aInformation storage
and retrieval systemsxStandardszUnited States.2 aAmerican National Standards Institute. bc-GenCollhZ674.8i.N44
1989tCopy 1wBOOKS
29. MARC is useful.
It is efficient.
It is our lingua franca.
There are many reasons to retain it.
But wait.....
32. Movement toward linked data
datahub.io - 5107 data stores
id.loc.gov
British National Bibliography (BNB)
VIAF
OCLC WorldCat Linked Data Store
Deutsche Nationalbibliografie (DNB) (Germany)
datos.bne.es (Spain)
W3C Library Linked Data Incubator Group
Many, many more...
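To make "linked data" concrete: the same bibliographic description that MARC packs into positional fields becomes subject-predicate-object triples whose terms are resolvable URIs. A minimal stdlib-only sketch, using Dublin Core terms for illustration; the example.org URIs are hypothetical placeholders standing in for real identifiers from services like id.loc.gov or VIAF:

```python
# A bibliographic description as linked-data triples, serialized to N-Triples.
# The example.org URIs are hypothetical placeholders for real identifiers.
DC = "http://purl.org/dc/terms/"
REC = "http://example.org/record/z3950-1988"

triples = [
    (REC, DC + "title", "Information retrieval service and protocol"),
    (REC, DC + "creator", "http://example.org/org/niso"),
    (REC, DC + "issued", "1989"),
]

def to_ntriples(triples):
    """Serialize triples: URIs in angle brackets, literals in quotes."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

The point of the stores listed above is that the creator triple links to an authority record rather than repeating a text string, so every catalogue that cites the same URI is automatically interoperable on that field.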
34. Organizations will not move away from a legacy system unless the new system:
a) Is demonstrably cheaper
b) Is demonstrably more effective in producing results (discovery, use, etc.)
c) Will make the organization demonstrably more efficient (staff, management, sales, etc.)
OR
d) The legacy system becomes entirely non-interoperable with other, more important systems
OR
e) The legacy system breaks and cannot be repaired
35. Can we say a new metadata management system based on linked data will be/do one of those things?
37. The point at which most standards fail is not prior to consensus.
It is in… Adoption
(or rather, in its absence)
38. “You would be a fool to design a system based on an interchange protocol.”
- Mark Bide, EDItEUR
45. What have we done?
In-person meeting on April 15-16 in Baltimore
An unconference on bibliographic data exchange
45 in-person, more than 40 more online, more than 200 subsequent viewers
47. The world makes way for the man who knows where he is going.
- Ralph Waldo Emerson
48. “If you don't know where you're going, you might not get there.”
- Yogi Berra
49. More Detail & Discussion
NISO Roadmap initiative
Monday 1:00 pm
MCP - Room N227a
50. Thank you!
Todd Carpenter, Executive Director
tcarpenter@niso.org
National Information Standards Organization (NISO)
3600 Clipper Mill Road, Suite 302
Baltimore, MD 21211 USA
+1 (301) 654-2512
www.niso.org
Editor's Notes
For those of you used to speaking in angle brackets, you'll notice that there isn't one. Frightening? Z39.50
Z39.50
The data element set (MARC fields and tags) identifies and characterizes the specific pieces of data within a record to support its use and manipulation. The data itself is primarily defined outside of the format, through content standards and general rule sets (e.g., AACR2, RDA, and others)
MARC was created in the mid-1960s by Henriette Avram at the Library of Congress to create these things. Anyone remember these things?
Why was MARC so efficient? It had to be. 1 KB * 290,000,000 = 290,000 MB. Assuming there are something like 290 million MARC records, or about 290 GB worth of raw MARC data in the world, that would equate to some $766 billion dollars of storage space in 1965. Today, I could go out and buy more hard disk storage than would be necessary to store all of the Library of Congress's collections--not just its catalogue, but everything it holds--for about $2,500.
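The arithmetic in that note, sketched out. The 1965 unit price of roughly $2.64 million per MB is an assumption back-derived from the $766 billion figure; the note itself does not state a unit price:

```python
# Back-of-envelope storage math from the note above.
# The 1965 price per MB is an assumed value chosen to reproduce
# the ~$766 billion figure; the note does not give a unit price.
records = 290_000_000                 # ~290 million MARC records (WorldCat scale)
total_mb = records // 1000            # 1 KB per record, decimal units

price_per_mb_1965 = 2_640_000         # assumed 1965 cost, $/MB (memory-era pricing)
cost_1965 = total_mb * price_per_mb_1965

print(total_mb, round(cost_1965 / 1e9))
```

At that implied unit price, the same 290 GB today costs well under the $2,500 quoted for the entire Library of Congress holdings, which is the contrast the slide is drawing.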
FORTRAN - released in 1957 by IBM. COBOL - drafted by (among others) Grace Hopper (pictured) in 1959. ASCII - first released in 1963. GPS - public release in 1967, but used by the Navy in 1963. First Internet node at UCLA - 1969. Hypertext - 1968 - by Douglas Engelbart.
LinkedIn = Y2K bug = it was a feature for long-term job security (video). By comparison (groups unless otherwise noted): FORTRAN programmers, 2,095; MARC21 (skill), 2,100; XML professionals, 4,140; C++ developers, 14,600; iOS developers, 37,000; Java developers, 156,000.
Metadata - the legacy infrastructure problem. Far too much of our infrastructure was implemented as the systems were first developed. If you were connecting up a world of telephones today, would you use wires? Knowing what we know today, would we build a metadata ecosystem in the same way? The problem is we have more than a billion MARC records. Nearly every library around the world, from the smallest school library to the largest national library, and every size, shape, and type of library in between, has a system built upon MARC. Old infrastructure isn't improved - it is maintained. Or it is replaced by something wholly new and a multiple factor more efficient. How do you assess the value of the opportunity costs of not doing something? How do you measure the lost sales of undiscovered books? How do you compare that potential value against the real costs of improving your outdated management systems that are "good enough"? How do you measure that potential without investing today in the system of the future? Ebooks provide the community the best opportunity to get around the mistakes of the past. What are the infrastructure needs of an ebook world that make it inherently different from a print world? Unfortunately, too much of our current thinking is either tied up in 1) get it out the door as quickly as possible (beta-shipping) or 2) replicating our old models and mistakes.
Old infrastructure isn't improved - it is maintained. Or it is replaced by something wholly new and a multiple factor more efficient.
WorldCat facts and statistics: 72,000+ libraries from 170 countries; 1.95 billion holdings; 289,963,654 bibliographic records
Here is just a partial look at how messy that world really is. Although we haven't studied it, I know it is even more complicated when you begin adding data from the other print media, the recording industry, and the television and movie industries. The data environment necessary to describe that information discovery flow is massive, complex, and labyrinthine. I would venture to guess it is also horribly inefficient, fraught with duplication, and to a large extent not interoperable.
In 2009, NISO commissioned a study of the exchange environment of book data. It is, not surprisingly, very, very messy. Since this particular project was focused on the exchange of MARC and ONIX data, other metadata communities are not described here, but they are equally relevant and equally challenged in interoperability terms.
Is the semantic web the way to go? I give it a full-throated “Possibly”.
NISO received a modest amount of funding from the Andrew W. Mellon Foundation in October to launch an initiative to draw together a roadmap to help move us toward an environment that
Ralph Waldo Emerson: “The world makes way for the man who knows where he is going.” Unfortunately, the corollary quote by Yogi Berra is also equally true, perhaps more so: “If you don't know where you're going, you might not get there.”