DM2E Digital Humanities Advisory Board - Pundit, Ask and scholarly research p... (Christian Morbidoni)
This document summarizes a meeting presentation about Work Package 3 (WP3) of the DM2E project. WP3 focuses on building a scholarly research platform. The presentation outlines tasks completed so far, including initial specifications (D3.1), a prototype platform (D3.2), and learning materials (D3.3). Upcoming tasks include background research on scholarly primitives due in month 36 (D3.4). Core components of the platform are demonstrated, including Pundit for annotations, Ask for sharing annotations, and Feed for integrating Pundit into other applications. Examples are given of how the platform is being used by scholars annotating a Wittgenstein text and how annotations can be filtered and searched.
The document presents a draft scholarly domain model that was presented at a DHAB meeting on June 15, 2012 by Stefan Gradmann and Steffen Hennicke. It includes sections on input areas, outputs, metadata, social context, zooming in on research, and a roadmap to stabilize the domain model, identify specializations, perform formalization and ontological modeling, and populate and test the model.
Pundit allows users to annotate and semantically structure web pages to enrich the Web of Data. It provides templates to make repeated annotations faster and automatic suggestions to extract and annotate entities from pages. The interface tour demonstrated these features and the basic layout of Pundit.
This document summarizes preliminary results from the Contextualization project, which aims to disambiguate and link open cultural heritage data. In the first year, several project partners contextualized persons and corporate bodies by identifying global identifiers from sources like GND, VIAF, and Wikidata. The document outlines the data sources and workflow used, and demonstrates the SILK linkage tool for matching entities across structured and unstructured data. It notes some limitations of the current approach and possibilities for improving the contextualization process going forward.
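The document does not reproduce the actual SILK linkage rules, which are written in SILK's XML-based link specification language. As a rough Python sketch of the underlying idea (all URIs and labels below are made up), contextualization of this kind boils down to normalizing labels and scoring candidates against an authority file such as GND or VIAF:

```python
from difflib import SequenceMatcher

# Hypothetical authority records (standing in for entries harvested from
# GND or VIAF); in a real workflow these would come from a dump or a
# SPARQL endpoint, and SILK would apply a declarative linkage rule.
AUTHORITY = {
    "http://example.org/gnd/118634313": "Wittgenstein, Ludwig",
    "http://example.org/gnd/118540238": "Goethe, Johann Wolfgang von",
}

def normalize(label: str) -> str:
    """Lowercase, strip commas, and sort tokens so 'Ludwig Wittgenstein'
    and 'Wittgenstein, Ludwig' compare on the same token set."""
    tokens = label.lower().replace(",", " ").split()
    return " ".join(sorted(tokens))

def link_entity(local_label: str, threshold: float = 0.85):
    """Return the best-matching authority URI, or None below threshold."""
    best_uri, best_score = None, 0.0
    for uri, auth_label in AUTHORITY.items():
        score = SequenceMatcher(None, normalize(local_label),
                                normalize(auth_label)).ratio()
        if score > best_score:
            best_uri, best_score = uri, score
    return best_uri if best_score >= threshold else None

print(link_entity("Ludwig Wittgenstein"))
# http://example.org/gnd/118634313
```

The threshold is the usual trade-off: lowering it links more entities but risks the false matches the document lists among the current approach's limitations.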
1) The document provides an update on key activities and data for WP1, including information on content providers and their contributions.
2) It notes some deviations from the original plans, such as EAJC acting as an aggregator rather than direct content provider.
3) Next steps include finalizing requirements gathering, holding an ingestion workshop, providing EDM trainings, and continued communication across work packages.
The document reports on Work Package 3 of the DM2E project, which involves building a prototype augmentation and collaboration platform. It provides an overview of the platform's components and goals. The platform encompasses tools developed in DM2E as well as third-party open source tools integrated to enable workflows for annotating, structuring, sharing, and exploring annotations. The core components include Pundit for annotation, Ask for storing domain vocabularies, and Linked Data APIs. Other open tools are used for demonstrations and educational materials.
1. The DM2E project aggregates metadata and content about digitized manuscripts from several European libraries and archives.
2. It develops an interoperability infrastructure using the Europeana Data Model and a DM2E extension to integrate heterogeneous metadata into a linked open data cloud.
3. The project also builds digital humanities applications like Pundit to showcase the usefulness of linked open data for research.
This document summarizes the activities of Work Package 2 for an EU co-funded project. It discusses the development and evaluation of the project's data model, including iterative refinement based on validation testing and provider feedback. Key points include finalizing the 1.2 version of the model, removing unused classes and properties, and analyzing metadata usage patterns across different provider datasets. The work package also oversees infrastructure integration and provides access to ingested data through search and browse interfaces.
EDF2013: Selected Talk Josep-L. Larriba-Pey: The Linked Data Benchmark Counci... (European Data Forum)
Selected talk by Josep-L. Larriba-Pey, Director, DAMA-UPC, Universitat Politècnica de Catalunya, BarcelonaTech, at the European Data Forum 2013, 9 April 2013 in Dublin, Ireland: The Linked Data Benchmark Council, benchmarking RDF and graph technologies.
UnifiedViews is a joint project currently maintained by Semantic Web Company (SWC) and Semantica.cz. It was mainly developed by Charles University in Prague as a student project called ODCleanStore (version 2). It builds on the experience SWC gained with the LOD Management Suite (LODMS) used in WP7, and on ODCleanStore (version 1), developed by Charles University in Prague for the WP9a use case of the LOD2 FP7 project. In the next release of the LOD2 stack, UnifiedViews will replace LODMS as the stack's ETL tool, and it has already been adopted in other projects.
In the webinar we will give a brief overview of the UnifiedViews project (Helmut Nagy). The main part will be a presentation of the tool and its capabilities (Tomas Knap).
Martin Donnelly from the Digital Curation Centre presented on recent updates and future plans for DMP Online, a web-based tool for creating data management plans. Key points included:
1. DMP Online allows users to create and update DMPs, meet funder requirements, and get guidance. It recently added abilities to share plans and overlay multiple templates.
2. A walkthrough showed the login process, plan creation, guidance features, template selection, sharing abilities, and export options.
3. DMP Online collaborates with funders, institutions, and research communities. It also works internationally with groups like DMPTool and ANDS in Australia.
4. Future plans include simplifying
DM2E Interoperability infrastructure (Kai Eckert – University of Mannheim) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Clipper Project, Glasgow Caledonian University Library, 2/11/15 (JC-Ed-Tech-COGC)
Clipper is an online media analysis and collaboration tool developed by a partnership between City of Glasgow College, The Open University, and ReachWill Ltd. It allows users to create clips from audio and video files, annotate clips, organize clips into clip lists, and share playable clips and clip lists. The tool aims to facilitate analysis and collaboration for digital researchers working with time-based media while respecting access permissions and copyright. Feedback was gathered from participants on the prototype system and its potential applications and implications for data management, service development, and policy.
Populating DBpedia FR and using it for Extracting Information (Julien PLU)
Julien Plu presented on populating DBpedia FR and using it for information extraction. He discussed mapping French Wikipedia infoboxes to DBpedia, how DBpedia FR is used at Orange, and a project called ExtSem for extracting relations from text. ExtSem uses natural language processing tools to parse text, build a dependency graph, and extract and select RDF triples. Experiments processing magazine articles extracted over 2,800 triples about celebrities and current events.
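The summary does not detail ExtSem's actual extraction rules; the following is only a toy sketch of the general pattern it describes (dependency parse in, selected triples out), with a hard-coded parse standing in for a real NLP toolkit:

```python
# Toy dependency parse of "Julien presented DBpedia": each token carries
# its grammatical relation and its head. A real pipeline (as in ExtSem)
# would obtain this from an NLP parser; here it is hard-coded.
parse = [
    {"word": "Julien", "rel": "nsubj", "head": "presented"},
    {"word": "presented", "rel": "root", "head": None},
    {"word": "DBpedia", "rel": "dobj", "head": "presented"},
]

def extract_triples(tokens):
    """Pair each root verb's nominal subject with its direct object to
    form (subject, predicate, object) triples."""
    triples = []
    for verb in (t["word"] for t in tokens if t["rel"] == "root"):
        subjects = [t["word"] for t in tokens
                    if t["rel"] == "nsubj" and t["head"] == verb]
        objects = [t["word"] for t in tokens
                   if t["rel"] == "dobj" and t["head"] == verb]
        for s in subjects:
            for o in objects:
                triples.append((s, verb, o))
    return triples

print(extract_triples(parse))  # [('Julien', 'presented', 'DBpedia')]
```

The selection step mentioned in the summary would then filter such raw triples, for example keeping only those whose subject and object resolve to DBpedia entities.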
Open Source Community Metrics: LinuxCon Barcelona (Dawn Foster)
The best thing about open source projects is that all of your community data is public and at your fingertips. You just need to know how to gather the data about your open source community so that you can hack it all together into something interesting that you can really use.
Information visualization of Twitter data for co-organizing conferences (Jari Jussila)
Information visualization of Twitter data for co-organizing conferences, introducing the CMAD2013 case; presented at the Mindtrek conference, 3 October 2013, Tampere, Finland. Co-authors: Jukka Huhtamäki, Hannu Kärkkäinen and Kaisa Still. Joint research publication of two projects funded by Tekes, the Finnish Funding Agency for Technology and Innovation: SOILA (Innovative value creation and business models of social media in B2B networks) and REINO (Relational Capital for Innovative Growth Companies).
Clipper is an online media analysis and collaboration tool that allows researchers to analyze audio and video files. It allows users to create clips from media files, annotate clips, organize clips into clip lists, and share playable clips and clip lists through URLs. The document describes a demonstration of the Clipper prototype system and seeks feedback on its functionality and potential applications from research data management and policy perspectives.
Everyone wants (someone else) to do it: writing documentation for open source... (Jody Garnett)
Many people cite documentation quality as the basis for their adoption of a piece of software, and yet documentation can be one of the largest quality gaps in an open source project. This talk will discuss why that is, what you (yes, you) can do about it, and how the author has so far managed to avoid burnout by learning to accept less-than-perfect grammar.
A FOSS4G 2015 Presentation
The DM2E project aims to enable content providers to contribute manuscript data to Europeana. It develops an EDM application profile and mappings from various source formats like TEI and MARC to the DM2E data model. The DM2E model specializes concepts from EDM for the manuscript domain. It is documented in PDF, OWL, and online using Neologism. Mappings are created using Mint and XSLT. The resulting linked data is ingested into Europeana to enrich its collections.
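DM2E's actual mappings are built with Mint and XSLT and cover far more than this; the following is only an illustrative Python sketch (the subject URI is invented, and just two elements are mapped) of what a TEI-to-RDF mapping of this kind does: pairing source elements with target properties.

```python
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

# A minimal TEI header. The predicates below use Dublin Core terms as
# found in EDM; the subject URI is a made-up example.
tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader><fileDesc><titleStmt>
    <title>Tractatus Logico-Philosophicus</title>
    <author>Ludwig Wittgenstein</author>
  </titleStmt></fileDesc></teiHeader>
</TEI>"""

def tei_to_triples(xml_text, subject_uri):
    """Map TEI title/author elements onto Dublin Core terms, producing
    (subject, predicate, object) triples ready for RDF serialization."""
    root = ET.fromstring(xml_text)
    mapping = {
        f".//{TEI_NS}title": "dc:title",
        f".//{TEI_NS}author": "dc:creator",
    }
    triples = []
    for xpath, predicate in mapping.items():
        for el in root.findall(xpath):
            triples.append((subject_uri, predicate, el.text))
    return triples

for t in tei_to_triples(tei, "http://example.org/item/1"):
    print(t)
```

In the real infrastructure this element-to-property table is what the XSLT stylesheets encode, with DM2E-specific properties specializing the EDM ones where the manuscript domain requires it.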
The document discusses progress on tasks within Work Package 3 of the DM2E project. It provides an overview of the tasks, participating partners and their contributions, timeline, and status updates on Tasks 3.1 and 3.2. For Task 3.1 on specifying requirements for the prototyping platform, initial specifications are online but additional feedback is being gathered. For Task 3.2 on building the prototype, the Korbo and Pundit tools are demonstrated and available for testing.
How we built an open video conferencing service to help people stay connected during corona
You can watch the YouTube recording here (in German):
https://t.co/cg7bGKDOjB?amp=1
Tracing Summit 2014, Düsseldorf. What can Linux learn from DTrace: what went well, and what didn't go well, on its path to success? This talk will discuss not just the DTrace software, but lessons from the marketing and adoption of a system tracer, and an inside look at how DTrace was really deployed and used in production environments. It will also cover ongoing problems with DTrace, and how Linux may surpass them and continue to advance the field of system tracing. A world expert and core contributor to DTrace, Brendan now works at Netflix on Linux performance with the various Linux tracers (ftrace, perf_events, eBPF, SystemTap, ktap, sysdig, LTTng, and the DTrace Linux ports), and will summarize his experiences and suggestions for improvements. He has also been contributing to various tracers: recently promoting ftrace and perf_events adoption through articles and front-end scripts, and testing eBPF.
This document summarizes updates and future plans for the DMPRoadmap platform. Key points include:
1) DMPRoadmap and DMPTool are being merged into a single codebase to combine features while improving performance.
2) Upcoming improvements include foreign language support, ORCID authentication, and an easier process for creating data management plans.
3) The development team is working to improve administrative controls and user guides, and add new functionality like basic plan review by August.
4) Longer term priorities include developing common standards, utilizing persistent identifiers, and integrating DMPs more fully into research workflows and systems.
Reasoning with Reasoning, Semantic technologies for research in the humanities and social sciences (STRiX), Göteborg, 24 November 2014. Kristin Dill, Austrian National Library (ONB); Gerold Tschumpel; Steffen Hennicke; Christian Morbidoni; Klaus Thoden; Alois Pichler.
The document summarizes the tasks and results of Work Package 1 (WP1) of the DM2E project. Key points include:
- WP1 involved collecting metadata formats and requirements, testing interfaces for mapping and linking content, and setting up test scenarios for the prototype platform.
- Final content integration took longer than expected due to complex data modeling, issues mapping content, and Europeana's policy changes. Not all promised content was delivered.
- User testing found that interfaces were useful for basic tasks but complex work was done "under the hood". Guidelines were created to represent metadata and define annotatable content.
- While not all content goals were met, over 19 million pages were delivered.
DM2E Community building (Lieke Ploeger – Open Knowledge) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Open Humanities Awards DM2E track: finderapp WITTfind (Maximilian Hadersbeck – LMU University of Munich) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Humanists and Linked Data (Steffen Hennicke – Humboldt Universität) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Open Humanities Awards Open track: SEA CHANGE (Rainer Simon – AIT Austrian Institute of Technology) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
DM2E Linked Data for Digital Scholars (with talks by Christian Morbidoni – Università Politecnica delle Marche / Net7, Steffen Hennicke – Humboldt Universität and Alessio Piccioli – Net7)
Open Humanities Awards Open track: Early Modern European Peace Treaties Online (Michael Piotrowski – IEG Leibniz Institute of European History) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
DM2E Content (Doron Goldfarb – ONB Austrian National Library) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Europeana and the relevance of the DM2E results (Antoine Isaac – Europeana) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Keynote : Beyond DM2E: towards sustainable digital services for humanities research communities in Europe? (Sally Chambers – DARIAH-EU, Göttingen Centre for Digital Humanities) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Welcome and short introduction to DM2E (Violeta Trkulja – Humboldt University) - Enabling humanities research in the Linked Open Web – DM2E final event
Susanne Müller, EUROCORR project: Burckhardtsource - Presentation given at DM2E event 'Putting Linked Library Data to Work: the DM2E Showcase' (18 Nov 2014, ONB, Vienna)
Marko Knepper, University Library Frankfurt am Main: From Library Data to Linked Open Data - Presentation given at DM2E event 'Putting Linked Library Data to Work: the DM2E Showcase' (18 Nov 2014, ONB, Vienna)
The document discusses a project called DM2E that is researching scholarly practices in the humanities and building digital humanities tools. It focuses on the Scholarly Domain Model (SDM) that DM2E is using to model the entities and relationships of the digital scholarship domain. The SDM identifies areas, primitives, activities, and operations of scholarly work. It also describes the Pundit suite of tools for annotating, linking, comparing, and visualizing scholarly sources that were developed based on the SDM.
Bernhard Haslhofer, AIT / Open Knowledge Austria and Lieke Ploeger, Open Knowledge: The value of open data and the OpenGLAM network - Presentation given at DM2E event 'Putting Linked Library Data to Work: the DM2E Showcase' (18 Nov 2014, ONB, Vienna)
The document discusses an evaluation of metadata usage and distribution in a linked data environment. It analyzes datasets from different institutions that mapped manuscript metadata to the Europeana Data Model (EDM) and a DM2E model. The evaluation aims to discover similarities and differences between datasets from different mapping institutions. It finds variations in usage of classes, properties, ontologies, and structural metrics like predicate-object-equality-ratio. The conclusion is that linked data quality assurance is important and people have a strong influence on metadata mapping.
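The evaluation's concrete metrics are defined in the document itself; below is only a simplified sketch of the kind of structural comparison involved (toy triples, not real provider data): profiling predicate usage per dataset and comparing the profiles pairwise.

```python
from collections import Counter

# Two toy datasets of (subject, predicate, object) triples, standing in
# for the EDM/DM2E mappings of two providing institutions.
dataset_a = [
    ("s1", "dc:title", "Faust"),
    ("s1", "dc:creator", "Goethe"),
    ("s2", "dc:title", "Tractatus"),
]
dataset_b = [
    ("s3", "dc:title", "Faust"),
    ("s3", "edm:type", "TEXT"),
]

def property_profile(triples):
    """Relative frequency of each predicate in a dataset."""
    counts = Counter(p for _, p, _ in triples)
    total = sum(counts.values())
    return {p: n / total for p, n in counts.items()}

def shared_property_ratio(a, b):
    """Fraction of distinct predicates the two datasets have in common
    (Jaccard similarity); a crude stand-in for the structural metrics
    used in the evaluation."""
    pa, pb = set(property_profile(a)), set(property_profile(b))
    return len(pa & pb) / len(pa | pb)

print(shared_property_ratio(dataset_a, dataset_b))  # 1 shared of 3 distinct
```

Low ratios between institutions are exactly the kind of variation the evaluation reports, and they reflect the conclusion that the people doing the mapping strongly shape the resulting metadata.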
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
FREE A4 Cyber Security Awareness Posters: Social Engineering, part 3 - Data Hops
Free, downloadable and printable A4 cyber security and social engineering safety training posters. Promote security awareness in the home or workplace. Lock them out. From training providers datahops.com.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe - Precisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within budget and save costs wherever possible. We understand, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary spending, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes and functional/test users
- Real-world examples and best practices you can apply immediately
"Choosing proper type of scaling" - Olena Syrota, Fwdays
Imagine an IoT processing system that is already quite mature and production-ready, whose client coverage is growing, and for which scaling and performance are life-and-death questions. The system uses Redis, MongoDB, and stream processing based on ksqlDB. In this talk, we will first analyze scaling approaches and then select the proper ones for our system.
Main news related to the CCS TSI 2023 (2023/1695) - Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7 to 9 November 2023 (konferenceszt.cz). It was attended by around 500 participants, with a further 200 following online.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The video recording (in Czech) of the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
5th LF Energy Power Grid Model Meet-up Slides - DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
- Insightful presentations covering two practical applications of the Power Grid Model.
- An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
- An interactive brainstorming session to discuss and propose new feature requests.
- An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
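The shape of that pipeline can be sketched in miniature. The snippet below shows only the row-preparation step: `embed()` is a stand-in for a real embedding model, the collection name and schema are illustrative assumptions, and the Spark and pymilvus calls a real pipeline would make are indicated in comments rather than executed.

```python
import hashlib

DIM = 8  # toy embedding size; a real pipeline would use an embedding model

def embed(text: str) -> list[float]:
    # Stand-in for a real model: hash the bytes into a fixed-length vector.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:DIM]]

def to_milvus_rows(docs: list[tuple[int, str]]) -> list[dict]:
    # In Spark this would run inside a transformation (e.g. mapPartitions);
    # each document becomes a dict matching the Milvus collection schema.
    return [{"id": doc_id, "vector": embed(text), "text": text}
            for doc_id, text in docs]

rows = to_milvus_rows([(1, "production search"), (2, "vector database")])
# The serving step with pymilvus would then be roughly:
#   client = MilvusClient(uri="http://localhost:19530")
#   client.insert(collection_name="docs", data=rows)
print(len(rows), len(rows[0]["vector"]))
```

The point of the intermediate row format is that the ETL side (Spark) and the serving side (Milvus) only need to agree on the collection schema.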
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Taking AI to the Next Level in Manufacturing.pdf - ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
DM2E DHAB meeting: WP3 Report Scholarly research platform
1. WP3 Report: Scholarly research platform
Christian Morbidoni (Net7)
Co-funded by the European Union
giovedì 3 aprile 14 (Thursday, 3 April 2014)
2. DM2E - DHAB meeting: Overall DM2E Architecture
(Diagram: WP1, WP2, WP3 - Scholarly research platform. YOU ARE HERE!)
3. WP3 timeline
- D3.1 Initial Specifications (M6). T3.1: starting use cases and requirements, including general architecture of core components and draft functional specs.
- D3.2 First version of the prototype platform (M11). T3.2: prototype platform main components up and running; documentation, screencasts and demos online.
- M13: Wittgenstein Brown Book experiment begins.
- D3.3 E-Learning Courses (M24). T3.3: documentation for users; documentation for developers; deployed and documented use cases; implementation of demonstrative apps.
- D3.4 Research Report on DH Scholarly Primitives (M36). T3.4: the scholarly domain model is under refinement and informs the development; an additional testing scenario is being set up; D3.4 due in M36; intermediate research report ready (Milestone).
(The reported period is marked on the timeline.)
4. WP3 Tasks and Deliverables
- Task T3.1: Initial functional specifications of the prototyping platforms
  - D3.1 - Specifications and requirements (DUE AT M6 - DONE)
- Task T3.2: Building of the prototype platform
  - D3.2 - Prototype scholarly research platform (DUE AT M11 - DONE)
- Task T3.3: Tutorials, documentation and demonstrative apps (DUE AT M24 - DONE)
  - D3.3 - Learning material (DONE)
- Task T3.4: Background research on Scholarly Primitives
  - D3.4 - Research Report on DH Scholarly Primitives (DUE AT M36)
5. The DM2E scholarly research platform
- Encompasses different software components:
  - Core components: developed in DM2E
  - Third-party open source tools: used in combination with the core components to create demonstrative applications
- For scholars: enabling the creation of "workflows" to:
  - Augment resources of interest with semantically structured annotations, including but not limited to DM2E digital cultural heritage objects (CHOs)
  - Share annotations (or keep them private)
  - Find annotations made by other scholars and exploit them (e.g. ad hoc visualizations)
6. The DM2E scholarly research platform
- ...and for developers to:
  - Deploy in specific environments and communities: configure the user interface, add your own annotation vocabulary, distribute as a bookmarklet or install into web sites
  - Build on top of the REST APIs: provide alternative ways of visualizing annotations and data; harvest public annotations and data to power digital libraries and web applications
  - Extend the code, contribute... All is open source! Pundit and Ask can be forked on GitHub
7. Core components
- Pundit client: the annotation environment
- Pundit server: persisting and reusing annotations across applications
- Ask: sharing, discovering and analysing public annotations
- Feed: connects the Pundit client as-a-service to existing applications
- Korbo: to store and edit Pundit vocabularies
8. Core components - 1
(Diagram labels: Ask; domain vocabularies store and API)
Annotations are RDF triples and conform to the Open Annotation specs.
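The statement that annotations are RDF triples conforming to the Open Annotation model can be made concrete with a minimal sketch. The `oa:` terms (`oa:Annotation`, `oa:hasTarget`, `oa:hasBody`) come from the Open Annotation vocabulary; the example.org URIs are placeholders, not Pundit's actual identifiers.

```python
# A Pundit-style annotation expressed as RDF triples following the
# Open Annotation model: the annotation links a body to a target.
OA = "http://www.w3.org/ns/oa#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

annotation = "http://example.org/annotation/1"   # placeholder URIs
target = "http://example.org/page#fragment"      # the annotated resource
body = "http://example.org/comment/1"            # what the annotation asserts

triples = [
    (annotation, RDF + "type", OA + "Annotation"),
    (annotation, OA + "hasTarget", target),
    (annotation, OA + "hasBody", body),
]

def to_ntriples(triples):
    # Serialize (subject, predicate, object) URI triples as N-Triples lines.
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)

print(to_ntriples(triples))
```

Because the annotation is just triples, any RDF store or SPARQL endpoint can persist and query it, which is what lets the server reuse annotations across applications.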
9. Core components - 2
(Diagram: a web application, e.g. Primo faceted search, connected to Pundit through Feed.)
Feed exposes a REST API to access Pundit. Parameters:
- LINK to DATA = a representation of the object to be fed to Pundit
- CONF = a configuration for Pundit
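Given those two parameters, a Feed call reduces to URL construction. A minimal sketch, assuming the `feed.thepund.it` endpoint shown later in the deck and an illustrative DM2E item URI; the item URI must be percent-encoded to survive as a query value:

```python
from urllib.parse import urlencode

FEED_ENDPOINT = "http://feed.thepund.it/"

def feed_url(item_uri: str, conf: str) -> str:
    # urlencode percent-escapes the item URI (":" -> %3A, "/" -> %2F)
    # so the whole URI can travel inside the dm2e query parameter.
    return FEED_ENDPOINT + "?" + urlencode({"dm2e": item_uri, "conf": conf})

url = feed_url("http://data.dm2e.eu/data/item/uib/wab/Ts-310", "dm2e.js")
print(url)
```

Opening such a URL hands Feed a link to the object's data plus the Pundit configuration to load, which is all the client needs to render the object as annotatable.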
10. Feed + Pundit to annotate DM2E content
(Screenshot)
11. Recent achievements (T3.2)
- Pundit improvements: edit annotations; see and repair broken annotations; vocabularies UI improved
- Ask, from early alpha to 1.0: UI improved; notebooks search and sort; faceted view on multiple notebooks; direct link to view a specific annotation in Pundit
- Feed improvements: on-the-fly parsing of Linked Data to produce annotatable representations; specific profile for EDM and the DM2E model
12. Learning material: D3.3
- Videos and screencasts
- User tutorials: a guide to the use of Pundit and the other tools; shows how Pundit can be configured and deployed for a specific community
- Demonstrative applications for developers:
  - Visualizations: Edgemaps and TimelineJS demos. GOAL: demonstrate the different ways of reusing Pundit RDF annotations to create specific visualizations
  - Reusing (DM2E) Linked Data (from WP2): building a Solr-based faceted browser; parsing and reusing RDF data
- Standard developer documentation online (API, installation, etc.)
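The Solr-based faceted browser mentioned above boils down to select queries with faceting turned on. A sketch of the request shape, using Solr's standard `facet` parameters but with an assumed endpoint and illustrative field names rather than the actual DM2E schema:

```python
from urllib.parse import urlencode

# Hypothetical Solr core; the DM2E demo's endpoint and fields differ.
SOLR_SELECT = "http://localhost:8983/solr/dm2e/select"

def facet_query(q: str, facet_fields: list[str]) -> str:
    # facet=true enables faceting; facet.field is repeated once per field,
    # so Solr returns value counts for each requested field alongside hits.
    params = [("q", q), ("wt", "json"), ("facet", "true")]
    params += [("facet.field", f) for f in facet_fields]
    return SOLR_SELECT + "?" + urlencode(params)

url = facet_query("Wittgenstein", ["creator", "subject"])
print(url)
```

The browser UI then renders each facet field's value counts as clickable filters, narrowing `q` on every click.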
13. Other tools used for demonstrations and learning material
14. Demonstrative application example: DM2E faceted browser
Try it at: http://metasound.dibet.univpm.it/dm2e/ajax-solr-master/examples/dm2e/
18. Let's see a real annotation from Wittgenstein scholars:
- Search for the Wittgenstein Brown Book in the faceted browser
- Or go to: http://feed.thepund.it/?dm2e=http%3A%2F%2Fdata.dm2e.eu%2Fdata%2Fitem%2Fuib%2Fwab%2FTs-310%2FTs-310%252C1%255B2%255Det2%255B1%255D&conf=dm2e.js
19. Possibly a lot of annotations... and from untrusted people...
20. Filter out unwanted annotators
- Each user can activate/deactivate notebooks
- ...and filter only active (trusted) ones
21. Focusing only on what is important...
22. What's next? What do we do with annotations?
First of all, we can share them as public notebooks in ASK: try it at http://ask.as.thepund.it
23. Search and open a number of notebooks...
24. Inspect all the statements made by your colleagues via a faceted browser
25. Depending on the information contained in your notebook...
26. ...specific visualizations can be applied
See: http://metasound.dibet.univpm.it/timelinejs/examples/pundit.html?notebook-ids=6290cd68
27. Prototype at: http://metasound.dibet.univpm.it/edgemaps/maps/demo.html#letters;map;;Wilhelm+von+Bode;
28. Work in progress...
Facets extracted from DM2E data + Wittgenstein Ontology + scholars' annotations
29. Next steps
- Development will continue for the core components (Ask, Feed, Pundit)
- Interesting directions beyond DM2E:
  - Integrate Pundit in Wikidata (joint Google Summer of Code with the WikiMedia and WikiSource communities)
  - Possible semantic augmentation pilot with the MarineLives project using Pundit
  - Possible semantic augmentation pilot with the GramsciSource project using Pundit and other DM2E technologies