Presentation given at Personal Digital Archiving on promising technologies demonstrated in research literature that may ultimately improve annotation and management of personal digital photo collections.
Building Web Archiving Technology, Together
This document discusses opportunities for the web archiving community to collaborate on building tools and standards together. It notes that most web archiving is currently done by a small number of large centralized organizations, but there is potential for more distributed and collaborative efforts. The document proposes several areas for collaboration, such as developing common APIs, modularizing archiving workflows into smaller components, creating community forums, and establishing shared standards and definitions. The goal is to strengthen web archiving by encouraging more participation and cooperation across different archives and organizations.
Lots of LOCKSS Keeping Stuff Safe: The Future of the LOCKSS Program
The document discusses the future of the LOCKSS (Lots of Copies Keep Stuff Safe) program. It outlines plans to evolve the LOCKSS software and organizational structure to better support web archiving and distributed digital preservation. Key points include rearchitecting LOCKSS as a set of modular web services, expanding existing LOCKSS networks, and exploring how LOCKSS could play a greater role in distributed preservation beyond local institutions. The overall vision is to make LOCKSS technology more sustainable, scalable and accessible to diverse communities for long-term access to digital content.
This document discusses the need to measure various aspects of web archiving programs to better manage and assess them. It identifies several key metrics that could be measured, such as the volume of websites captured and preserved, usage of archived web content, costs associated with web archiving, and factors related to quality, buy-in, loss of content, and policy impacts. The document also notes challenges in measurement and capturing metrics that do not always have quantifiable measures.
Understanding Legal Use Cases for Web Archives
This document provides an overview of legal use cases for web archive evidence and discusses relevant considerations. It begins with examples of cases where web archive evidence from the Internet Archive's Wayback Machine has been used, such as for trademark or copyright infringement. It then examines authentication standards and cases related to authenticating web archive evidence through affidavits, judicial notice, or expert testimony. The document also discusses reliability factors courts have considered, such as the Wayback Machine disclaimer, issues of incompleteness, and temporal coherence of archived pages. Overall, it analyzes the legal context and precedent for how courts have assessed the evidentiary value of web archives.
Lots More LOCKSS for Web Archiving: Boons from the LOCKSS Software Re-Archite...
The LOCKSS software is being re-architected to reduce costs, integrate components, and prepare for the evolving web. The new components include tools for bibliographic metadata extraction, publisher heuristics, discovery via metadata, format migration on access, and an audit and repair protocol. The roadmap includes Dockerization, improved access via OpenWayback, and format migration and search web services by the end of 2018. The goal is more community involvement through open development on GitHub.
This document discusses unlocking the LOCKSS system with APIs to make it more interoperable and enable integration with other digital preservation systems. It describes opportunities to integrate polling/repair functionality, repository replication, and access features through APIs. The goal is to reduce costs by leveraging open-source software, aligning with web archiving standards, and enabling external systems to interact with LOCKSS components through a web services architecture. This will help LOCKSS scale and evolve with changes on the web.
Interoperability and Technical Collaboration for Web and Social Media Archiving
The document discusses interoperability and technical collaboration for web and social media archiving. It describes Heritrix, an archival crawler for web archiving, and newer approaches like headless browsers and archiving proxies that can execute JavaScript and support more capture tools. It also discusses leveraging APIs to reliably collect higher-fidelity social media data and aligning social media harvesting with web archiving. Key questions raised include how to build technical architectures and community frameworks to facilitate broad participation in web and social archiving, increase distributed capacity, and make archiving more inclusive.
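The headless-browser capture approach mentioned above can be sketched minimally: a modern browser loads the page, executes its JavaScript, and the rendered DOM is dumped for archiving. The binary name and flags below assume a recent Chromium build and are illustrative, not a complete capture pipeline.

```python
def headless_capture_cmd(url, budget_ms=5000):
    """Build a headless-Chromium command that loads a page, runs its
    JavaScript, and prints the rendered DOM to stdout -- one building
    block of the browser-based capture tools described above."""
    return [
        "chromium",                            # or "google-chrome"
        "--headless",                          # run without a window
        "--disable-gpu",
        f"--virtual-time-budget={budget_ms}",  # let scripts settle
        "--dump-dom",                          # emit the post-JS DOM
        url,
    ]

cmd = headless_capture_cmd("https://example.org/")
print(" ".join(cmd))
```

Running this command via `subprocess.run(cmd, capture_output=True)` would yield the post-execution HTML, which an archiving proxy could then record alongside the underlying HTTP responses.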
Rethinking Web Archiving Quality Assurance for Impact, Scalability, and Susta...
Presentation for session 209, "Balancing Quality of Life and Quality Assurance: Best Practices and Tools for Web Archiving QA" at the 2016 Society of American Archivists Annual Meeting.
Collection Development for Selective Web Archiving
The document discusses factors to consider when developing a collection policy for selective web archiving. It notes the large and growing amount of digital content and outlines challenges like limited resources. It recommends focusing on at-risk and unique third-party content that complements existing collections, is valuable to researchers, and addresses specific research needs. Observance of other archives' collection policies and access restrictions is also advised to avoid duplication of efforts and legal issues.
Why Not Lots of Copies Keep(ing) Software Safe?
This document discusses how the LOCKSS (Lots of Copies Keep Stuff Safe) system, which was originally developed to preserve web archives, could potentially play a role in software preservation. It describes how LOCKSS uses a distributed network of nodes run by different institutions to preserve content, provides examples of Private LOCKSS Networks and Controlled LOCKSS that were created for specific communities and content, and raises questions about how a similar model could work for software preservation.
The document provides an overview of the WASAPI project funded by IMLS to develop data transfer APIs between web archiving repositories. The project involves the Internet Archive, Stanford University, Rutgers University, and the University of North Texas working from 2016 to 2018 to build community, model preservation networks, and develop APIs for Archive-It and LOCKSS to standardize researcher access and exchange of archived data between service providers, repositories, and research workspaces.
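A WASAPI data-transfer request can be illustrated with a simple URL builder. The base URL below follows Archive-It's implementation of the API, and the parameter names reflect the WASAPI specification as I understand it; verify both against your provider's documentation before relying on them.

```python
from urllib.parse import urlencode

# Assumed Archive-It WASAPI endpoint; other providers expose their own.
BASE = "https://partner.archive-it.org/wasapi/v1/webdata"

def wasapi_query(collection=None, crawl_start_after=None, page=1):
    """Return a URL that lists WARC files matching the given filters."""
    params = {"page": page}
    if collection is not None:
        params["collection"] = collection
    if crawl_start_after is not None:
        # ISO 8601 date: only crawls started after this date
        params["crawl-start-after"] = crawl_start_after
    return BASE + "?" + urlencode(params)

url = wasapi_query(collection=1234, crawl_start_after="2017-01-01")
```

An authenticated GET on such a URL returns paginated JSON whose file entries carry download locations and checksums, which is what lets repositories and research workspaces exchange archived data in a standardized way.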
Outreach to Campus Webmasters for a Better Web, and Better Web Archiving
Presentation for the Society of American Archivists 2015 Annual Meeting, session 306: "Seeding Engagement: Web Archiving Outreach Strategies and Opportunities."
A Snapshot of the U.S. Web Archiving Landscape through the 2013 NDSA Survey R...
The document summarizes key findings from a 2013 survey of web archiving programs in the United States. It found that most programs are run by universities, are relatively new, and use Archive-It. While programs have matured, concerns remain around growing data volumes, access, and fully capturing priority content types. Opportunities exist in collaborations and developing social media policies, but web archiving priorities and support require further development at many institutions.
Campaign Web Archives to Support Multi-Institutional Research
Presentation for the Society of American Archivists 2014 Annual Meeting, session 502: "Untangling the Web: Diverse Experiences with Access from the Web Archiving Trenches."
Boiling the Ocean, Together: Web Archive Collection Development in a Global C...
This document summarizes Nicholas Taylor's presentation on web archive collection development in a global context. It discusses the distributed and selective nature of existing web archiving initiatives and collections. It also examines considerations for developing web archive collections, such as aligning with organizational missions, preserving at-risk content, and anticipating future research uses. Key questions are raised about maintaining awareness of what content already exists, developing collaborative projects, and creating policies and strategies for building unique and valuable web archive collections.
This document discusses how to build archivable websites that can be preserved by web archives. It recommends following web standards and accessibility guidelines, using stable URLs, semantic URLs, and limiting external assets. Tools for archiving websites include Heritrix, Wget, and HTTrack. The Internet Archive's Wayback Machine allows examining how a site appears in archives. The document encourages assessing a site's archivability using tools like Archive Ready.
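The advice above about limiting external assets can be made concrete with a toy check in the spirit of tools like Archive Ready: count how many assets a page pulls from other hosts, since cross-host dependencies are a common cause of incomplete captures. This stdlib-only sketch is illustrative; a real assessment examines much more (robots.txt, URL stability, standards compliance).

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ArchivabilityChecker(HTMLParser):
    """Collect asset URLs served from hosts other than the page's own."""
    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.external_assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        src = attrs.get("src") or (attrs.get("href") if tag == "link" else None)
        if src:
            host = urlparse(src).netloc
            if host and host != self.page_host:
                self.external_assets.append(src)

checker = ArchivabilityChecker("example.org")
checker.feed('<img src="https://cdn.example.net/logo.png">'
             '<img src="/local/photo.jpg">')
print(len(checker.external_assets))  # 1 external asset found
```

Only the CDN-hosted image is flagged; the relative URL resolves to the page's own host and will usually be captured alongside the page.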
From Seed to Harvest: Web Archiving Program Considerations for SUL
Presentation given at Stanford University Libraries as part of candidacy for the Web Archiving Service Manager position on web archiving program considerations and elements.
This document summarizes various tools for web archiving, including tools for capturing websites and individual web pages (HTTrack, Heritrix, Wget, WARCreate), replaying archived websites (Wayback Machine, MementoFox), managing workflows (Web Curator Tool, NetarchiveSuite, CINCH), hosted services (Archive-It, Web Archiving Service), and file utilities (HTTrack2Arc, warc-tools, WAT Utilities).
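The capture tools listed above (Heritrix, Wget, WARCreate) share a common output format: the WARC record. A minimal "response" record can be assembled with the standard library alone. This is a simplified sketch; a real record also carries a WARC-Record-ID and usually a payload digest, per the WARC specification.

```python
from datetime import datetime, timezone

def warc_response_record(url, http_bytes, warc_date=None):
    """Assemble a minimal WARC/1.0 'response' record: named headers,
    a blank line, the captured HTTP message, and a trailing blank pair."""
    if warc_date is None:
        warc_date = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    headers = (
        "WARC/1.0\r\n"
        "WARC-Type: response\r\n"
        f"WARC-Target-URI: {url}\r\n"
        f"WARC-Date: {warc_date}\r\n"
        "Content-Type: application/http; msgtype=response\r\n"
        f"Content-Length: {len(http_bytes)}\r\n"
        "\r\n"
    )
    return headers.encode("utf-8") + http_bytes + b"\r\n\r\n"

record = warc_response_record(
    "http://example.org/",
    b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n",
    warc_date="2014-01-01T00:00:00Z",
)
```

File utilities like warc-tools operate on concatenated (often gzip-compressed) sequences of records in exactly this shape.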
This document discusses the Wayback Machine, an open source tool used by many institutions to archive and provide access to historical web pages. It describes common limitations of web archives like missing elements from pages and errors with JavaScript. Workarounds are provided like disabling JavaScript. The document also provides strategies for finding pages missing from archives, such as using search engines to find historical URLs when a site URL has changed. It encourages involvement in identifying important websites to archive for future access.
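The strategies above for locating captures rely on the Wayback Machine's replay URL scheme, which embeds a timestamp between the service root and the original URL. A one-line builder makes the pattern explicit:

```python
def wayback_url(url, timestamp="*"):
    """Build a Wayback Machine replay URL. Timestamps use the
    YYYYMMDDhhmmss form; '*' requests a calendar of all captures,
    a handy way to hunt for pages whose live URL has since changed."""
    return f"https://web.archive.org/web/{timestamp}/{url}"

print(wayback_url("http://example.org/", "20140101000000"))
# https://web.archive.org/web/20140101000000/http://example.org/
```

Requesting a timestamp with no exact capture redirects to the temporally closest one, which is why adjacent elements of one archived page can come from different crawl dates.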
The document discusses designing websites to be preservable by web archivists. It provides tips such as using durable data formats, stable URLs, metadata embedding, and following web standards to help archiving technologies fully capture and replay the site. The document recommends seeing how a site validates, looks when archived, and generating sitemaps as ways to check if it meets priorities of being fully capturable, having its experience replayable over time, and remaining coherent as archives are preserved.
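Sitemap generation, one of the checks recommended above, can be sketched with the standard library: a sitemap is just an XML document listing each URL and its last-modified date so crawlers can discover the site's full URL space.

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Generate a minimal sitemap.xml from (url, lastmod) pairs,
    following the sitemaps.org protocol."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap([("http://example.org/", "2014-01-01")])
```

Publishing the result at a stable path (conventionally /sitemap.xml) and referencing it from robots.txt helps archival crawlers capture pages that site navigation alone would miss.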
Web and Twitter Archiving at the Library of Congress
Presentation given at the Joint Conference on Digital Libraries (JCDL) Web Archive Globalization Workshop on web and social media archiving efforts at the Library of Congress.
Where We're Going: Non-Traditional Careers for LIS Graduates
Presentation given at the Federal Library Information Network (FLICC) Forum on the imperative for library and information science graduates to consider careers outside of "traditional" librarianship.
Usability Testing in Federal Libraries: A Case Study
Presentation given to the Federal Library Information Network (FLICC) Emerging Technologies Working Group on improvised usability testing of a redesigned electronic resources access portal for the U.S. Supreme Court Library.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... (Jason Yip)
The typical problem in product engineering is not bad strategy so much as no strategy. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you're wrong, it forces a correction; if you're right, it helps create focus. I'll share how I've approached this in the past, including what worked and lessons from what didn't work so well.
"Choosing proper type of scaling" (Olena Syrota, Fwdays)
Imagine an IoT processing system that is already quite mature and production-ready, whose client base is growing, and for which scaling and performance are critical concerns. The system uses Redis, MongoDB, and stream processing based on ksqlDB. In this talk, we will first analyze scaling approaches and then select the proper ones for our system.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk, we will show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Main news related to the CCS TSI 2023 (2023/1695) (Jakub Marek)
An English 🇬🇧 translation of the presentation accompanying the talk I gave on the main changes introduced by CCS TSI 2023 at the largest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7 to 9 November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leverage this data for RAG and other GenAI use cases, and chart your course to production.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in database for shared mailboxes. We will show you such cases and their solutions. And of course, we will explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low going forward.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
- Creating a compelling user experience for any software, without the limitations of APIs
- Accelerating the app creation process, saving time and effort
- Enjoying high-performance CRUD (create, read, update, delete) operations, for seamless data management
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
A Survey of Research Prospects for more Manageable Personal Digital Photo Collections
1. A Survey of Research Prospects for more Manageable Personal Digital Photo Collections
Nicholas Taylor
@nullhandle
Personal Digital Archiving
February 21, 2013
“Photo Mosaic-Orange Daisy” by Flickr user BerniMartin under CC BY-ND 2.0
17. satellite and ground imagery corroboration for geotagging
Grosse and Johnson: “Matching a photograph to satellite images”
18. identifying cities by trivial visual elements
Doersch et al.: “What Makes Paris Look Like Paris?”
“Eiffel Tower” by Flickr user HarshLight under CC BY 2.0
19. geotagging and 3D scene construction using large photo sets
Snavely, Seitz, and Szeliski: “Photo Tourism: Exploring Photo Collections in 3D”
21. automatic “event” identification by temporal and visual clustering
Cooper et al.: “Temporal Event Clustering for Digital Photo Collections”
22. recognizing persons using body patch matching
Suh and Bederson: “Semi-Automatic Photo Annotation Strategies Using Event Based Clustering and Clothing Based Person Recognition”
Cooray et al.: “Identifying Person Re-Occurrences for Personal Photo Management Applications”
23. recognizing persons using social context
Naaman et al.: “Leveraging Context to Resolve Identity in Photo Albums”
24. recognizing persons using social network context
Stone et al.: “Autotagging Facebook: Social Network Context Improves Photo Annotation”
25. inferring photographer based on height of shot
Farid: “Who Took That Picture? (Or at least, how tall was the photographer?)”
Photographic prints from different time periods have different, identifiable color signatures, reflecting the distinct succession of chemical processing technologies; these signatures could facilitate generating approximate EXIF timestamps for scanned prints.
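As an illustration of the dating-by-color-signature idea, here is a toy nearest-centroid classifier in Python. The per-decade centroid values are invented placeholders, not measured data; a real system would learn signatures from reference prints of known date.

```python
# Toy sketch: estimate a scanned print's decade from its overall color
# cast. The centroids below are HYPOTHETICAL placeholders, not measured
# color signatures of real photographic processes.
DECADE_CENTROIDS = {
    "1950s": (0.62, 0.55, 0.42),  # warm, yellowed cast (hypothetical)
    "1970s": (0.58, 0.44, 0.38),  # reddish fading (hypothetical)
    "1990s": (0.50, 0.50, 0.50),  # more neutral balance (hypothetical)
}

def mean_rgb(pixels):
    """Average (r, g, b) over an iterable of 0..1 RGB triples."""
    n = 0
    totals = [0.0, 0.0, 0.0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
        n += 1
    return tuple(t / n for t in totals)

def estimate_decade(pixels):
    """Return the decade whose centroid is nearest the image's mean color."""
    avg = mean_rgb(pixels)
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(avg, centroid))
    return min(DECADE_CENTROIDS, key=lambda d: dist(DECADE_CENTROIDS[d]))
```

A production version would use richer features than the mean color (e.g., full histograms) and calibrated scanner input, but the nearest-centroid structure is the same.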
The orientation of digitized photos can be detected and automatically corrected.
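Content-based orientation detection in the literature relies on trained classifiers over low-level image features; as a much simpler illustration of the idea, here is a toy Python heuristic that assumes the brightest border of a scan is the sky and rotates the image until that border is on top.

```python
# Toy orientation correction: rotate a grayscale image (nested lists of
# 0..1 values) until its brightest border is at the top. This crude
# "sky is bright" cue is an illustrative assumption, not the published method.
def edge_brightness(img):
    """Mean brightness of each border of a row-major grayscale grid."""
    return {
        "top": sum(img[0]) / len(img[0]),
        "bottom": sum(img[-1]) / len(img[-1]),
        "left": sum(row[0] for row in img) / len(img),
        "right": sum(row[-1] for row in img) / len(img),
    }

def rotate90(img):
    """Rotate a row-major grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def auto_orient(img):
    """Rotate in 90-degree steps until the brightest border is on top."""
    for _ in range(4):
        b = edge_brightness(img)
        if b["top"] == max(b.values()):
            return img
        img = rotate90(img)
    return img
```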
Cities can be identified by far less than their most visually iconic landmarks. Photos from different European cities were successfully localized relying on seemingly trivial elements in the visual environment: lamp-posts, grating iron-work, door frames, etc.
EXIF timestamps and the visual content of photos can be used to organize them into probable “events.”
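A minimal sketch of the temporal side of this clustering, in Python: consecutive photos separated by less than a fixed gap are grouped into one “event.” (Cooper et al. use adaptive, multi-scale thresholds and visual similarity as well; the fixed six-hour gap here is an assumption for illustration.)

```python
from datetime import datetime, timedelta

def cluster_events(timestamps, gap=timedelta(hours=6)):
    """Group EXIF timestamps into 'events': a gap longer than `gap`
    between consecutive photos starts a new cluster. The fixed gap is a
    simplified stand-in for the adaptive thresholds in the literature."""
    events = []
    for t in sorted(timestamps):
        if events and t - events[-1][-1] <= gap:
            events[-1].append(t)  # continue the current event
        else:
            events.append([t])    # gap exceeded: start a new event
    return events
```

Adding the visual component would mean merging or splitting these temporal clusters based on image-similarity scores between neighboring photos.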
Extending recognition algorithms to examine other parts of the body aside from the face increases accuracy.
Knowing which individuals are commonly in photos with which other individuals informs person recognition.
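A minimal Python sketch of this co-occurrence cue: count how often pairs of people appear in the same photo, then rank candidate identities for an unknown face by how often each has co-occurred with someone already recognized in that photo. The function names and input format are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(photos):
    """Count how often each pair of people appears in the same photo.
    `photos` is an iterable of per-photo name lists."""
    pairs = Counter()
    for people in photos:
        for a, b in combinations(sorted(set(people)), 2):
            pairs[(a, b)] += 1
    return pairs

def rank_candidates(known_person, candidates, pairs):
    """Order candidate identities by how often they have co-occurred
    with a person already recognized in the photo."""
    def score(candidate):
        key = tuple(sorted((known_person, candidate)))
        return pairs.get(key, 0)
    return sorted(candidates, key=score, reverse=True)
```

In practice this prior would be combined with the face- or body-based similarity scores rather than used alone.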
Who an individual is connected to and interacts with on social networks can help narrow likely candidates for person annotation in photos.
The angle from which the photo was shot relative to a vanishing point and the ground plane can be used to infer the height of the camera, which may provide insight into who the photographer was.
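Under the simplifying assumptions of a level camera and a subject of known height standing on a flat ground plane, the geometry reduces to similar triangles: the horizon line sits at camera height in the scene, so the fraction of the subject's image height that lies below the horizon gives the camera height. A sketch:

```python
def camera_height(y_feet, y_head, y_horizon, subject_height_m):
    """Estimate camera height (meters) from a level shot of a standing
    person. Image y-coordinates increase upward. Because the horizon
    projects at camera height, similar triangles give:
        H_cam / H_subject = (y_horizon - y_feet) / (y_head - y_feet)
    Assumes a level camera and flat ground; Farid's analysis handles
    the general case via vanishing points."""
    return subject_height_m * (y_horizon - y_feet) / (y_head - y_feet)
```

For example, if a 1.8 m subject spans 300 pixels and the horizon crosses one third of the way up that span, the camera was about 0.6 m off the ground, suggesting a crouching or seated photographer.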
Muse is a personal data mining application for e-mail that shows, among other things, the frequency of communications with different individuals and groups of people over time; perhaps something like it could be adapted for use with photos?
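As a sketch of that adaptation, the Python function below tallies how often each person appears in photos per year, the kind of per-person timeline Muse draws for e-mail correspondents. The (year, people) input format is an assumption for illustration, presuming names have already been annotated on the photos.

```python
from collections import defaultdict

def person_frequency_by_year(photos):
    """Count how often each person appears in photos per year.
    `photos` is an iterable of (year, [people]) records; the output maps
    person -> {year: count}, suitable for a Muse-style timeline chart."""
    freq = defaultdict(lambda: defaultdict(int))
    for year, people in photos:
        for person in set(people):  # count each person once per photo
            freq[person][year] += 1
    return {person: dict(years) for person, years in freq.items()}
```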