What tools and services are necessary to build an open linked library and how can we move existing digital library content into an open linked data model and use those tools to repurpose our own content?
The Biodiversity Heritage Library and bibliographic citations: towards new u... – Trish Rose-Sandler
The data model and user interface for the Biodiversity Heritage Library (BHL) portal at http://www.biodiversitylibrary.org/ was originally designed to accommodate books and journals found in botanical garden libraries and natural history museums. As the size and reputation of the BHL grew, there were many publishers and individuals who wanted to contribute to the BHL but their content consisted of publication types at more granular levels, such as articles, book chapters, and dissertations. In order to ingest and serve these materials, in early 2011, BHL launched a separate portal called Citebank hosted at citebank.org. Currently, Citebank contains over 180,000 citations linked to content files, either hosted at citebank.org or hosted externally. While feedback on Citebank has been positive, users indicated a desire to combine both the services of the BHL portal and the services of the Citebank portal into a single interface in order to enable a unified search for all biodiversity literature. To respond to these needs, the BHL has begun expansion of its data model in the BHL portal to accommodate articles, book chapters, treatments and other segment-like material so that they can be searched alongside its traditional book and journal content. Parallel to this activity the NSF-funded Global Names Architecture (GNA) Project has enlisted Citebank to fulfill the role of a global biodiversity repository for bibliographic citations. In support of this, Citebank will provide a key functional component to the GNA - that of reconciliation services for citations. Once reconciled, citations can be linked either to scanned page images in the BHL, or to PDFs uploaded by users. If neither exists, citations can point to other digital representations online. Experience with Citebank has resulted in many lessons learned about working with diverse publication types; data formats; and contributors with varying levels of technical competencies. 
Those lessons were incorporated into a functional requirements document that is being used to inform development of the BHL data model. This talk will outline the functional requirements needed for a global citation repository for biodiversity and how those requirements will better serve the needs of the biodiversity community.
Crowdsourcing your cultural heritage collections: considerations when choosi... – Trish Rose-Sandler
This talk was given at the Visual Resources Association conference on March 13, 2015. The moderator was Trish Rose-Sandler and the speakers included Robert Guralnick, Gaurav Vaidya, and Trish Rose-Sandler. Notes from the talk are visible when downloaded.
Finding a goldmine of natural history illustrations within BHL texts: the Ar... – Trish Rose-Sandler
The Biodiversity Heritage Library (BHL) has now achieved a critical mass of digitized historic texts – over 41 million pages and counting. The BHL portal can be searched by several access points including title, author, subject, and scientific name. But what is largely hidden, and entirely unsearchable, are the millions of natural history illustrations found within the BHL books and journals. These visual resources, which include drawings, paintings, photographs, maps, and diagrams, represent work by some of the finest botanical and zoological illustrators in the world, including the likes of John James Audubon, Georg Dionysius Ehret, and Pierre Redouté. Many of the illustrations are the first recorded descriptions of much of the world’s biota, providing the scientific foundation for contemporary taxonomic research and conservation assessments. For some organisms, these illustrations are the only verifiable record of their existence on Earth, owing to changes in global climate patterns and the rapid loss of natural habitat for many species. Audiences for these illustrations also cross a variety of disciplines and include biologists, artists, historians, illustrators, graphic designers, archivists, educators, students, and citizen scientists.
In 2012, the Missouri Botanical Garden was awarded a grant from the National Endowment for the Humanities to support a project called The Art of Life: Data Mining and Crowdsourcing the Identification and Description of Natural History Illustrations from the Biodiversity Heritage Library (BHL). This talk will discuss the Art of Life objectives and current status. It will go into detail about the algorithms and schema designed for finding which pages contain illustrations and describing the subsequent output. Finally the talk will discuss the project’s benefits for the scientific community such as improving access to a significant collection of public domain images related to biodiversity.
The history of biodiversity through words and pictures – Trish Rose-Sandler
This talk was given as part of a conference called Curious Images held at the British Library Dec 18 2014 which brought together researchers and artists to share ideas, techniques and methods they have applied to image collections
This was a talk for the St Louis Chapter of Special Libraries Association about library-related projects going on in the Center for Biodiversity Informatics at Missouri Botanical Garden
Breathing new life into old data - How opening your collection can spark imag... – Trish Rose-Sandler
This presentation was given by Doug Holland and Trish Rose-Sandler at the Missouri Libraries Association conference held in St Louis MO in Oct 2013. There is a significant online literature and image repository called the Biodiversity Heritage Library (BHL). Content from this repository has inspired a range of users to re-contextualize the BHL data in new, previously unimagined roles, including: scientists creating visualizations of species-name publishing; citizen scientists blogging about fascinating creatures; designers incorporating marine life into wedding invitations; artists creating collages of animal illustrations and nature photography; and home decorators adding punch and wit to the walls of their kids’ bedrooms. Using the example of BHL and its open data principles, the presentation will discuss what open data is and how libraries can expand the impact and reach of their collections through open data methods.
Digitizing Entomology: The Biodiversity Heritage Library @ the Smithsonian – Martin Kalfatovic
Digitizing Entomology: The Biodiversity Heritage Library @ the Smithsonian. Martin R. Kalfatovic. National Museum of Natural History, Department of Entomology Staff Meeting. November 26, 2007. Washington, DC.
Keynote presented at the International Association of University Libraries Conference (IATUL), 20 June 2017 in Bolzano, Italy.
Library metadata was created to describe objects and enable a reader to understand when they had the same or a different object in hand. Now linked data concepts and techniques are allowing us to recreate, merge, and link our metadata assets in new ways that better support discovery - both in our local systems and on the wider web. Tennant described this migration and the potential it has for solving key discovery problems.
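As a hypothetical sketch of the merging step Tennant describes (the record fields and identifiers below are invented for illustration, not drawn from the talk), two descriptions of the same object can be combined once a shared identifier links them:

```python
def merge_records(records):
    """Merge metadata records that share an identifier (e.g. the same
    URI), unioning the values seen for each descriptive field."""
    merged = {}
    for rec in records:
        key = rec["id"]
        target = merged.setdefault(key, {"id": key})
        for field, value in rec.items():
            if field == "id":
                continue
            # Keep every distinct value contributed for this field.
            target.setdefault(field, set()).add(value)
    return merged


# Two descriptions of the same work, from a local catalog and the web.
local = {"id": "urn:x-demo:work/1", "title": "Moby Dick",
         "creator": "Melville, Herman"}
web = {"id": "urn:x-demo:work/1", "title": "Moby-Dick; or, The Whale"}
combined = merge_records([local, web])
```

The merged record now carries both title variants and the creator, so a discovery interface can match a search against either form.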
Nick Sheppard, Research Data Management Advisor, University of Leeds.
Talk at CILIP MmIT event, "The wisdom of the crowd? Crowdsourcing for information professionals", on 19/3/18 at the University of Huddersfield.
Current metadata landscape in the library world – Getaneh Alemu
This workshop was presented at MTSR-2017 (Nov. 27, 2017) in Tallinn, Estonia http://www.mtsr-conf.org/index.php/programme The workshop aims to place the current metadata landscape in libraries in context, with particular emphasis on emerging theory/principles and best practices, covering:
• The theory of enriching and filtering
• Metadata enriching through RDA (Hands on - The RDA Toolkit and implementation of RDA at Southampton Solent University)
• Metadata filtering through FRBR (practical issues that cataloguers face in FRBRising their catalogue)
• Metadata management (metadata quality, authority control and subject headings)
• Metadata systems, tools and applications (practical issues of e-books and database cataloguing)
This presentation was provided by Ashley Clark, Northeastern University, during a NISO Virtual Conference on the topic of data curation, held on Wednesday, August 31, 2016
Best Practices for Descriptive Metadata for Web Archiving – OCLC
Web archiving has become imperative to ensure that our digital heritage does not disappear forever, yet many institutions have not begun this work. In addition, archived websites are not easily discoverable, which severely limits their use. To address this challenge, OCLC Research has established the OCLC Research Library Partnership Web Archiving Metadata Working Group to develop a data dictionary that will be compatible with library and archives standards. Three reports on this project are available in late 2017, focused on metadata best practices guidelines, user needs and behaviors, and evaluation of web archiving tools.
Grameen America presentation outlining progress to date with their first operation in Jackson Heights, Queens, New York, and what it takes to have Grameen America bring a microcredit replication to your community. The presentation appears to have been created in mid-2009.
Mifos is an open source management information system (MIS) purpose-built for the microfinance industry, helping institutions more efficiently and effectively deliver financial services to the poor.
This presentation outlines the flexible data export options in Mifos that allow an MFI to export the necessary transaction and general ledger data from Mifos and import it into its accounting system, successfully integrating the two.
An overview of the functionality in Mifos, a centralized open source web-based management information system (MIS) platform purpose-built for the microfinance industry.
Innovative, client focused and sustainable, the BRAC microfinance programme is a critical component of our holistic approach to support livelihoods. Over the course of the last four decades, we have grown to become one of the world’s largest providers of financial services to the poor, providing tools which millions can use for the betterment of their lives.
Building an Open Source Application Strategy – Acquia
David Cole from the Executive Office of the President of the United States presents "Beyond Websites, Building an Open Source Application Strategy" at the Drupal Business Summit in Washington DC
USING NEW CHANNELS TO EXPAND MICROFINANCE SERVICES – MABSIV
Mr. Melvin Yu of Cantilan Bank shares their technological development during the 2012 RBAP-MABS National Roundtable Conference on June 8, illustrating how far banking technology has come – and what its implications have been.
RBAP Executive Director Vicente Mendoza shares how the MABS Program will continue and expand with RBAP and RBRDFI during the 2012 National Roundtable Conference on June 8.
Microfinance Forum 2008 (4.Scb MF and Role Of Investors Tokyo1108) – Living in Peace
Materials from the Microfinance Forum held at the World Bank Tokyo Learning Center on November 28, 2008.
4.Scb MF and Role Of Investors Tokyo1108
On the business opportunities Standard Chartered sees in microfinance, and on its investment performance.
Prashant Thakker (Global Business Head for Microfinance, Standard Chartered Bank)
About Living in Peace:
Living in Peace, the organizer of this forum, was founded in October 2008, primarily by people from financial institutions with an interest in economic development. Its membership also includes civil servants, staff of international organizations, and students, and it obtained nonprofit (NPO) legal status in April 2009. It is currently preparing to launch a microfinance fund in partnership with Music Securities. (Website: http://www.living-in-peace.org/
Former blog: http://d.hatena.ne.jp/microfinance/)
A follow-up to our 2011 presentation on the new Linked Open Digital Library, discussing how we are creating a digital library centered around Linked Open Data. Includes details on how we are creating a dataset of botanists and their publications that is to be shared as linked open data.
https://doi.org/10.6084/m9.figshare.11854626.v1
Presented at the Dutch National Librarian/Information Professional Association annual conference 2011 - NVB2011
November 17, 2011
An introduction to the Joint Information Systems Committee Resource Discovery iKit. Includes a look at controlled vocabularies declared in Resource Description Framework (RDF)/Simple Knowledge Organisation System (SKOS) and Wikipedia entries. Presented by Tony Ross at the CILIPS Centenary Conference Branch and Group Day, which took place 5 Jun 2008.
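As a rough illustration of what an RDF/SKOS controlled vocabulary expresses (the concepts below are invented examples, not taken from the iKit), broader/narrower relations are simply triples that can be queried:

```python
# The SKOS core namespace; the concept identifiers are made-up examples.
SKOS = "http://www.w3.org/2004/02/skos/core#"

# (subject, predicate, object) triples for a tiny two-concept vocabulary.
triples = [
    ("ex:Mammals", SKOS + "prefLabel", "Mammals"),
    ("ex:Cats", SKOS + "prefLabel", "Cats"),
    ("ex:Cats", SKOS + "broader", "ex:Mammals"),  # Cats is narrower than Mammals
]


def narrower(concept, triples):
    """Return all concepts whose skos:broader relation points at `concept`."""
    return {s for s, p, o in triples
            if p == SKOS + "broader" and o == concept}
```

A discovery interface can use such queries to widen or narrow a search along the vocabulary's hierarchy.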
Using the conversion of Taxonomic Literature II as an example, I discuss in this high-level presentation some things to keep in mind while creating a linked open data set.
I also present a few examples of, and links to, LOD data sets and further information.
IFLA LIDASIG Open Session 2017: Introduction to Linked Data – Lars G. Svensson
At the IFLA Linked Data Special Interest Group open session in Wroclaw we briefly introduced the mission of the SIG and then went on to a brief introduction to what linked data is and why that topic is important to libraries.
The presentation was held jointly by Astrid Verheusen (general introduction to the SIG) and Lars G. Svensson (introduction to Linked Data)
Presentation at ELAG 2011, European Library Automation Group Conference, Prague, Czech Republic. 25th May 2011
http://elag2011.techlib.cz/en/815-lifting-the-lid-on-linked-data/
Presented at Industry Symposium, IFLA, 14 August 2008. Describes a new environment of global information services using metadata, taxonomies, and knowledge organization. Makes the case that these changes will permanently affect what it means "to catalog" materials for the purpose of connecting citizens, students and scholars to the information they need, when and where they need it.
Similar to Building the new open linked library: Theory and Practice (20)
Foundations to Actions: Extending Innovations to Digital Libraries in Partner... – Trish Rose-Sandler
This talk was given by Trish Rose-Sandler, Leora Siegel, Katie Mika, Pamela McClanahan, Ariadne Rehbein, Marissa Kings, and Alicia Esquivel at the DPLAFest in Chicago on April 21 2017
Expanding access to natural history images: the BHL and its global consortium – Trish Rose-Sandler
Talk given at the 2016 IFLA conference. Part of the workshop called "Worth a Thousand Words: A Global Perspective on Image Description, Discovery, and Access"
The Art of Life: merging the worlds of art and science – Trish Rose-Sandler
This talk is about the Art of Life project and was part of session 149 - SCIENCE+ART=CREATIVITY: Libraries and the New Collaborative Thinking at the IFLA conference in Lyon France in August 2014. Authors are: Trish Rose-Sandler, Nancy Gwinn, Constance Rinaldo. Accompanying paper is at http://library.ifla.org/681/1/149-rose-sandler-en.pdf
Revealing and Contextualizing the treasures of the Biodiversity Heritage Libr... – Trish Rose-Sandler
This talk focused on two projects being carried out by the Missouri Botanical Garden related to the Biodiversity Heritage Library - Art of Life and Engelmann Correspondence. The Art of Life, funded by NEH, is a project to identify and describe the rich natural history illustrations hidden within the pages of BHL literature. The Engelmann Correspondence project, funded by IMLS, is a project to digitize and make available in BHL the letters sent to the 19th-century botanist George Engelmann by his colleagues in the US and Europe. Both projects are providing new content types to the BHL portal http://www.biodiversitylibrary.org/, helping contextualize its published literature, and expanding BHL audiences.
More than just a pretty picture: improving the discoverability of illustrati... – Trish Rose-Sandler
This was a demo given by Trish Rose-Sandler and Kyle Jaebker at the Museums and the Web Conference on April 20th 2013 related to how BHL is improving access to its natural history illustrations via Flickr and via the Art of Life project. Authors for the poster and handouts include: Gilbert Borrego, Grace Costantino, Bianca Crowley, Kyle Jaebker, and Trish Rose-Sandler
Reach Out! Opportunities for the Visual Resource Center – Trish Rose-Sandler
The Art of Life Project and Biodiversity Heritage Library were featured in this session on Visual Resource Centers and how institutions are reaching new audiences for their content through collaboration and outreach
In spring of 2012 the National Endowment for the Humanities funded the Missouri Botanical Garden to embark on an ambitious project called The Art of Life. The project’s goals are to identify and describe natural history illustrations from the digitized books and journals in the online Biodiversity Heritage Library (BHL).
The BHL is a consortium of natural history and botanical libraries that cooperate to digitize and make accessible legacy literature held in their collections. The BHL portal now provides access to more than 110,000 volumes and 40 million pages of texts. Contained within these texts, but not easily accessible due to a lack of descriptive metadata, are millions of visual resources (plates, figures, maps, and photographs), many of which were produced by the finest botanical and zoological illustrators in the world, including the likes of John James Audubon, Georg Dionysius Ehret, and Pierre Redouté. Scholars and educators who rely heavily on visual resources in their research and teaching (e.g. biologists, art historians, curators, historians of science) will, for the first time, be able to find and view a wealth of illustrations of plant and animal life from which to make connections between science, art, culture, and history.
Nearly one year into the project, this presentation will discuss our objectives, progress, tools and technologies being utilized, and explain how the final deliverables will benefit all libraries.
Essentials of Automations: The Art of Triggers and Actions in FME – Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf – Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Removing Uninteresting Bytes in Software Fuzzing – Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools -- libxml2's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
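The underlying idea can be sketched in a few lines (an invented toy, not DIAR's actual algorithm): greedily drop each seed byte whose removal leaves the program's observed coverage signature unchanged.

```python
def prune_seed(seed: bytes, coverage) -> bytes:
    """Greedily remove bytes whose absence leaves coverage unchanged.

    `coverage` is any callable mapping an input to a hashable coverage
    signature (e.g. a frozenset of executed branch ids).
    """
    baseline = coverage(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + 1:]
        if coverage(candidate) == baseline:
            seed = candidate   # byte i was uninteresting; drop it
        else:
            i += 1             # byte i affects behaviour; keep it
    return seed


# Toy "program": coverage depends only on which marker bytes appear.
def toy_coverage(data: bytes):
    return frozenset(b for b in data if b in (ord("<"), ord(">")))


lean = prune_seed(b"xx<zz>yy", toy_coverage)  # -> b"<>"
```

A real fuzzer would obtain the coverage signature by instrumented execution of the target, and a pruned seed like this keeps mutations focused on the bytes that actually steer control flow.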
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Securing your Kubernetes cluster_ a step-by-step guide to success !
Building the new open linked library: Theory and Practice
1. Building the New Open Linked Library: Theory and Practice …and results! Keri Thompson, Joel Richard, Trish Rose-Sandler LITA National Forum, September 30, 2011
2.
3. 1.5 million volumes in the collection, plus assorted archival collections
15. Linked Data
“The Semantic Web isn’t just about putting data on the web. It is about making links, so that a person or machine can explore the web of data. With linked data, when you have some of it, you can find other, related, data.” Tim Berners-Lee, Linked Data – Design Issues
1. Use URIs as names for things.
2. Use HTTP URIs so that people can look up those names.
3. When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL).
4. Include links to other URIs, so that they can discover more things.
22. Made available through various mechanisms such as .csv files and APIs
URI: http://library.si.edu/tl2/author/charles-darwin
Predicate: owl:sameAs
Object: http://viaf.org/viaf/27063124
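The subject–predicate–object triple above can be sketched in code. A minimal illustration, assuming nothing beyond the standard library: the URIs come from the slide, the `to_ntriples` helper is hypothetical, and the full URI behind the `owl:` prefix is the standard OWL namespace.

```python
# Minimal sketch: one linked-data triple serialized as an N-Triples line.
# The helper name is illustrative, not part of any BHL/SIL API.

def to_ntriples(subject, predicate, obj):
    """Serialize a triple of three URIs as one N-Triples statement."""
    return f"<{subject}> <{predicate}> <{obj}> ."

triple = (
    "http://library.si.edu/tl2/author/charles-darwin",
    "http://www.w3.org/2002/07/owl#sameAs",  # full URI behind owl:sameAs
    "http://viaf.org/viaf/27063124",
)

print(to_ntriples(*triple))
```

The point of the exercise: once every statement is a (URI, predicate, URI) triple, any consumer that understands the serialization can merge your data with anyone else's.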
28. Digital Library Planning
Analyze and categorize our current & future online content
Create high-level data models for common content types
Questions: Where are we metadata-rich? What do we have that others don’t? What is feasible right now?
40. Consume our data and others’ to create new aggregate websites
41. Linked Digital Library Planning
Decide which data elements should be exposed as linked data for each content type
Choose appropriate vocabularies
Create a rough timeline and plan for migrating site content (= 1 year*)
* Optimism included in this estimate
42. Linked Data in our Library
Implement all this linked open data goodness (and a shiny new website) by moving to Drupal 7
70. TL-2 Page Sample
http://library.si.edu/tl2/author/darwin
RDF Type = foaf:Person
foaf:lastName, foaf:familyName
foaf:firstName, foaf:givenName
foaf:name, skos:prefLabel
tl2:birthYear
tl2:deathYear
tl2:description
tl2:personAbbrev
http://library.si.edu/tl2/book/1313
RDF Type = bibo:Book
tl2:bookNumber
dc:title
event:place
dc:publisher
tl2:bookAbbreviation
dc:created
71. TL-2 Page Sample Results
http://library.si.edu/tl2/author/darwin
tl2:creatorOf “http://library.si.edu/tl2/book/1313”
owl:sameAs “http://viaf.org/viaf/27063124”
foaf:lastName “Darwin”
foaf:familyName “Darwin”
foaf:firstName “Charles”
foaf:givenName “Charles”
foaf:name “Darwin, Charles Robert”
skos:prefLabel “Darwin, Charles Robert”
tl2:birthYear “1809”
tl2:deathYear “1882”
tl2:description “British evolutionary biologist”
tl2:personAbbrev “Darwin”
http://library.si.edu/tl2/book/1313
dc:creator “http://library.si.edu/tl2/author/darwin”
owl:sameAs “http://www.archive.org/details/originofspecies00darwuoft”
tl2:bookNumber “1313”
bibo:shortTitle “On the origin of species”
dc:title “On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life.”
event:place “London”
dc:publisher “John Murray”
dc:created “1859”
tl2:bookAbbreviation “Origin sp.”
90. Other Resources
LinkedData.org
http://linkeddata.org/guides-and-tutorials
http://linkeddatabook.com/editions/1.0/
Drupal Groups
http://groups.drupal.org/semantic-web
http://groups.drupal.org/libraries
Tim Berners-Lee, TED talks
Tim Berners-Lee on the next Web (2009)
The year open data went worldwide (2010)
91. BHL is…
A consortium of 13 natural history and botanical libraries and research institutions
An open access digital library for legacy biodiversity literature
An open data repository of taxonomic names and bibliographic information
94. Benefits of open data
Allows data which was created for a specific purpose and audience to interact with other data to serve new, previously unimagined roles.
95. What information have we opened up?
Essentially, everything – our metadata (descriptive, rights, structural), our image files, scientific names, OCR’d files
96. Technical methods for opening data
Data exports
APIs
OpenURL
OAI-PMH
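Of the mechanisms listed above, OAI-PMH is the most formulaic: every request is a base URL plus a `verb` parameter and its arguments. A hedged sketch of composing such a request with the standard library; the endpoint URL below is a placeholder, not BHL's actual OAI-PMH address.

```python
# Sketch: composing an OAI-PMH harvest request. OAI-PMH requests are
# plain HTTP GETs whose query string carries a "verb" and its arguments.
from urllib.parse import urlencode

BASE = "https://example.org/oai"  # placeholder endpoint, not BHL's real one


def oai_request(verb, **params):
    """Build an OAI-PMH request URL from a verb and its arguments."""
    query = urlencode({"verb": verb, **params})
    return f"{BASE}?{query}"


# ListRecords with the mandatory metadataPrefix; oai_dc (simple Dublin
# Core) is the format every OAI-PMH repository must support.
url = oai_request("ListRecords", metadataPrefix="oai_dc")
print(url)
```

A harvester would fetch this URL, read the `resumptionToken` from the XML response, and loop until the token is empty.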
97. Who is reusing our data?
Tropicos
Rod Page – BioGUID – http://bioguid.info/bhl/
Rod Page – BioStor – http://biostor.org/
Encyclopedia of Life – http://eol.org/
Ryan Schenk – Visualizing taxonomic synonyms – http://ryanschenk.com/2011/02/visualizing-taxonomic-synoymns/
108. Making open data successful
Promote it!
109. Do a code challenge
110. Publicly display your data’s copyright/licensing and API terms of service
111. Thank You! Building the New Open Linked Library
Keri Thompson, Head of Web Services, Smithsonian Institution Libraries – thompsonk@si.edu, @DigiKeri_SIL
Joel Richard, Lead Developer, Smithsonian Institution Libraries – richardjm@si.edu
Trish Rose-Sandler, Data Analyst, Biodiversity Heritage Library – trisha.rose-sandler@mobot.org
Editor's Notes
Possibly omit or move this slide
Mo’ data mo’ better. Mission fulfilment. Sharing=caring. Efficient reuse of data.
Q to Audience: How many people have heard of linked data before today? How many feel they have a basic grasp of what it is? How many people want to watch me trip over my tongue trying to explain it in less than a minute? (If good grasp, note that of the 4 principles of linked data from T B-L, 1 & 2 are easy, 3 is where we’re working now, and 4 we’re trying to figure out how to do. Otherwise, on to the definition.)
LD describes a way of publishing structured data to the web so it can be interlinked with other structured data. Shared data is usually (not always) in RDF (Resource Description Framework), often as RDF in XML (we understand XML!), a standard that allows data from different sources to be connected and queried. Linking data enables you to enrich your data and give it additional context. Data is expressed almost like sentences, in ‘triples’: URI = your data, Predicate = verb, Object = object. The object can be a link to another system, or can just be more data, e.g. “1809”. The predicate is chosen from a set vocabulary (or ontology), or if you have to make one up, you publish that new vocabulary on the web so others can get to it. Common vocabularies include:
FOAF (Friend of a Friend) – people, personal relationships
DC (Dublin Core) – publications, etc.
SKOS (Simple Knowledge Organization System) – links systems, concepts
OWL (Web Ontology Language) – links ontologies, extension of RDF
How did SIL start thinking about implementing LD? Website rebuild. Goal is to make our data more useful, reusable, and accessible to people and machines, more than just putting our stuff up ‘online’. Started looking at CMS. Wow! D7 is not only a CMS, it’s open source, and it has RDFa baked in, along with common LD ontologies! Sold!
- Lots of bibliographic data in the ILS, but unfortunately no access to it (for now)
- Re-doing online books, good candidates for providing linked biblio data
- Existing ‘database’ stuff – inventories, as well as new project digitization/markup of the reference book Tax Lit 2 (more from JMR)
Initial focus for us will be on “database” like content we already have, or are currently creating. JMR will discuss one example.
As we move through our website redesign, and rearrange more of our content online, we will gradually go through books, other database stuff, maybe even simple stuff like library locations and hours, and apply our planning principles.
Questions for the audience to get a feel for who you are.
* Computer
* Librarians
* Worked with Databases
* Worked with Drupal
Why Drupal? Why not! Why 7 and not 6? Well, RDFa is built in. If it’s there, we’re more likely to use it. RDFx extends RDFa to provide different formats (XML, JSON, NTriples, Turtle via REST). RDFx also provides a UI to set the RDF mappings (Drupal comes with some already set up, but we really want to customize ours). Evoc is used for caching and also for autocomplete, which we’ll see later. AUDIENCE QUESTION: How many know the difference between RDF and RDFa?
Since we sort of know what Linked Data is, let’s take a quick look and compare RDFa, which embeds RDF data into the webpage, with RDF in XML. The identifier is the URI of the page, the predicates are embedded in the page and displayed in orange, and the object or property is displayed in the <div> or <span>. There may be more that needs to be done here.
This RDF is formatted in XML; note that only the predicates are shown here. There is no extraneous HTML to distract. Typically you need a special tool to use this information; the web browser doesn’t natively understand an RDF XML file.
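The "special tool" can be very small. A sketch, using only the standard library, of walking an RDF/XML document and printing its triples; the tiny document inlined below mirrors the TL-2 Darwin example from the slides rather than any actual SIL output.

```python
# Sketch: extracting subject/predicate/object from a tiny RDF/XML
# document. The rdf:Description element names its subject in rdf:about;
# each child element is a predicate, its text the object.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
FOAF = "http://xmlns.com/foaf/0.1/"

doc = f"""
<rdf:RDF xmlns:rdf="{RDF}" xmlns:foaf="{FOAF}">
  <rdf:Description rdf:about="http://library.si.edu/tl2/author/darwin">
    <foaf:name>Darwin, Charles Robert</foaf:name>
    <foaf:givenName>Charles</foaf:givenName>
  </rdf:Description>
</rdf:RDF>
"""

root = ET.fromstring(doc)
for desc in root.findall(f"{{{RDF}}}Description"):
    subject = desc.get(f"{{{RDF}}}about")
    for prop in desc:
        # prop.tag looks like '{namespace}localname'; split it back apart
        ns, local = prop.tag[1:].split("}")
        print(subject, f"{ns}{local}", prop.text)
```

Each printed line is effectively one triple, which is all an aggregator needs to merge this record with data from other sources.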
Field / Node Ref / Views are built in.
SPARQL is an add-on module to allow others to come in and query the data on our site.
SPARQL Views allows us to use external data from other sites, presumably to create new content (we may use this).
RDF Ext Vocab (evoc) is used to cache vocabularies to use them in the autocomplete feature when setting up RDF mappings (among other things).
Biblio is a nice module, but it needs a serious update before we can start using it.
Namespaces that Drupal comes with:
Dublin Core
FOAF – Friend of a Friend – links between people and the things they create and do
Open Graph – allowing web pages to become an object in the social graph – mainly Facebook
SIOC – Semantically-Interlinked Online Communities
SKOS – knowledge organization – concepts, collections, ideas
OWL – Web Ontology Language
BIBO – Bibliographic Ontology – for books! How convenient! Covers nearly all of what we need for describing books on the web. We may need to extend for publication year (rather than publication date).
Later we’ll discuss a few cases where we aren’t finding something perfectly appropriate for our needs or our data is very specialized, so we may extend an existing namespace or create our own. We can do this as long as the namespace is published and documented for others to reuse.
Adding a namespace is a simple matter of giving it a prefix and the URI to the namespace. This page does not show all of the namespaces used by RDFx; there are actually 8 or 9 of them. Drupal can also import and cache these namespaces using the External Vocabulary Importer, for reuse and also for the autocomplete feature, which is really nice. (Not shown, but it’s also a matter of supplying the prefix and name.)
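What registering a prefix actually buys you is the ability to write compact "prefix:localname" terms (CURIEs) that expand to full URIs. A minimal sketch; the prefix table lists vocabularies named elsewhere in this deck, and the `dc` URI shown is the DCMI terms namespace (repositories sometimes use the older /dc/elements/1.1/ instead).

```python
# Sketch: expanding CURIEs ("foaf:name") into full URIs, which is the
# mapping a prefix registration like Drupal's establishes.
PREFIXES = {
    "dc": "http://purl.org/dc/terms/",
    "foaf": "http://xmlns.com/foaf/0.1/",
    "skos": "http://www.w3.org/2004/02/skos/core#",
    "owl": "http://www.w3.org/2002/07/owl#",
    "bibo": "http://purl.org/ontology/bibo/",
}


def expand(curie):
    """Turn 'foaf:name' into 'http://xmlns.com/foaf/0.1/name'."""
    prefix, local = curie.split(":", 1)
    return PREFIXES[prefix] + local


print(expand("foaf:name"))
print(expand("owl:sameAs"))
```

A custom vocabulary like "tl2" would simply be one more entry in this table, pointing at wherever the new namespace document is published.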
Although some very basic RDF mappings are set up in Drupal for us, it’s easy to create our own. They can be viewed in multiple places, but on the content type, each field’s RDF mappings can be edited on a single page. Additionally, if we have imported the vocabulary into Drupal, we get the nice benefit of the autocomplete feature to help us choose the appropriate mapping.
TL2 is a database. In book form! Botanists and their books, cross-referenced in the index using unique identifiers across all volumes. It’s really a database! Used by botanists; having this online and searchable could be huge. At least having it online saves them the trouble of going to the physical volumes. Since no one else has this online in linked data form (in fact it’s barely online as it is), we’re going to become the authority for botanist names. Also, SI has contributed to the supplemental volumes.
Here we have a page of TL2: our good man Charles Darwin and some information about him. At the bottom, we have an obscure (ha!) book that he wrote, which is number 1313 in the TL2 scheme of things. Our goal is to identify the data elements that we are going to initially make public and how to map them to the vocabularies to make them more useful to others. This goes hand in hand with the parsing that we’ve hired a contractor to do; they’re pre-parsing some of the information based on our specs. 1313. Nice address. 1313 Mockingbird Lane. Munsters reference. Bad joke.
TODO: Link SameAs to BHL, not OCLC. Here’s an example. The identifiers, /darwin and /1313, are linked together with “dc:creator” and in the reverse “dc:contributor” (I think). (Predicates are one-way.) So these links, which come from the index of TL2, are cross-linked, and our site is nicely browseable and searchable and so on. But we also link out to other places – VIAF for Darwin’s identifier and WorldCat for Origin of Species – that allow others to go out and do other things with this data. We link out, but how do we get people to link back into us? That’s one of the questions we aim to get an answer to, but solving it will take some time.
And here’s what we’re going to start with. (Run through the different elements, starting with the URI, the RDF type, then the predicates and data types.) TODO: More info here. Other data elements may be linked later; there’s certainly stuff available here: herbaria, other bibliographic entries (need to define their relationships), handwriting samples, postage stamps (!!). Mentioned earlier that we might create our own or extend an existing vocabulary. You’ll note here that we are creating the “tl2” namespace because the concepts in TL2 are specific to it, and yet it is commonly used enough that a new namespace would be useful to others. BUT! Something is missing! Where’s that “linked” part of linked data?
So to recap, this entire dataset is initially going to be represented in exactly two content types, Authors and Books. A node reference between them allows us to browse between them in Drupal, but also helps create the RDF links for LOD.
So how do we get this data into Drupal? We start with an XML file from our contractor. It’s already partially parsed; we’ll do some more parsing and convert that data into CSV, most likely. Using the Feeds module’s import tool, we’ll bring in the data and (hopefully) create the proper node references between them. We need to keep the blocks of information together (herbaria, handwriting samples, bibliography, postage stamps) until we can parse them out at a later date as needed. Ultimately we’ll create a custom search just for TL2, even though its data will be included in the general site search on our Drupal site.
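The XML-to-CSV step described above can be sketched with the standard library. The element names and sample records below are purely illustrative (the real contractor file's schema isn't shown in this deck); the idea is just flattening one record per row so a tool like the Feeds importer can map columns to fields.

```python
# Hedged sketch: flatten a (hypothetical) parsed TL-2 author XML file
# into CSV rows for a Feeds-style importer. Element names are invented
# for illustration, not taken from the actual contractor deliverable.
import csv
import io
import xml.etree.ElementTree as ET

xml_data = """
<authors>
  <author id="darwin"><name>Darwin, Charles Robert</name><birth>1809</birth></author>
  <author id="candolle"><name>Candolle, Augustin Pyramus de</name><birth>1778</birth></author>
</authors>
"""

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["id", "name", "birth"])  # header row for field mapping
for author in ET.fromstring(xml_data).findall("author"):
    writer.writerow([
        author.get("id"),
        author.findtext("name"),
        author.findtext("birth"),
    ])

print(out.getvalue())
```

Keeping the stable `id` column is what lets a later import pass create node references between the Author rows and a parallel Books file.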
What things do we still need to do? The RDFx (RDF extensions) module uses one set of identifiers and Drupal uses another, i.e. /node/22365 and /node/22356.rdf for the XML version versus /tl2/author/charles-darwin. Other useful information in TL2 includes “See also” entries, alternate names, etc. – useful to researchers. We do plan to incorporate this data in a later phase of development, if only for the human-friendly site search. We’re investigating whether it makes sense to use SPARQL when users are querying our own data – would this facilitate the search or make things more complicated? As we mentioned before, we’ll need to design, document, and publish any extended or new ontologies (vocabularies) that we create for TL2. Our website’s been around for 15 years. Now we are laying the foundation for the next 15 years. Hopefully.