These slides were presented during the Microinsurance Innovation Facility’s second webinar, co-organized with the Microinsurance Network and held on July 13th, 2011. It focused on "New Frontiers in Microinsurance Distribution": the strengths and weaknesses of alternative distribution channels, and areas that can be explored to make these channels work more effectively from the insurer and client perspectives. The webinar followed a recent paper on alternative channels prepared by Cenfri (http://www.ilo.org/public/english/employment/mifacility/download/brnote7_en.pdf). Presenters were Brandon Mathews of Zurich Financial Services, Anja Smith of Cenfri, and Pranav Prashad of the Facility, with Jasmin Suministrado of the Facility as moderator.
PAINTING EXHIBITION – Class of 2009 – Opening: Friday, 02.08.2013, 6:00 p.m. (Emanuel Pope)
PAINTING EXHIBITION
Class of 2009
Opening – Friday, 02.08.2013, 6:00 p.m., Sala Milleniului
Presented by: Assoc. Prof. BERTALAN KOVACS
In attendance: CĂTĂLIN CHERECHEŞ, Mayor of the city of Baia Mare
On behalf of the CENTRE OF EXCELLENCE FOR THE PROMOTION OF ROMANIAN CREATIVITY „PORŢILE NORDULUI” BAIA MARE
MIHAI GANEA and VIRGINIA PARASCHIV, principal coordinators, were also present.
Sherborn: Pilsk, Joel Richard & Kalfatovic - Unlocking the Index Animalium: F... (ICZN)
Smithsonian Institution Libraries received funding in 2004 to digitize Sherborn’s Index Animalium. The initial project was to digitize the page images and re-key the data into a simple data structure. As the project evolved, a more complex database was developed to enable quality searching to retrieve species names and to search the bibliography. The OCRed, scanned Index Animalium was re-keyed to a specification of 99.995% accuracy. Building on lessons learned from the MBL WHOI Library’s project for Neave’s Nomenclator Zoologicus, simple expressions were used to break apart the re-keyed text. Coinciding with the development of the Biodiversity Heritage Library (2005), it became obvious there was a need to integrate the scanned Index Animalium, BHL’s scanned taxonomic literature, and taxonomic intelligence. The challenges of working with legacy taxonomic citation, computer matching algorithms, and making connections have brought us to today’s goal of making Sherborn available as open linked data. The goal is to allow repurposing of data, partnering with others to enable machine-to-machine communication, and sharing information for broad discovery and access.
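The "simple expressions" mentioned here are essentially pattern matches over the re-keyed text; a hedged sketch of the idea (the entry layout below is invented for illustration and is not Sherborn's actual format):

```python
# A toy pattern for splitting a re-keyed index entry into fields.
# The layout "epithet Author Year" is illustrative only.
import re

ENTRY = re.compile(
    r"^(?P<epithet>[a-z][a-z-]*)\s+"    # species epithet
    r"(?P<author>[A-Z][A-Za-z.]*)\s+"   # authority
    r"(?P<year>\d{4})$"                 # year of publication
)

def parse_entry(line: str):
    """Split one re-keyed entry into name, author, and year fields."""
    m = ENTRY.match(line.strip())
    return m.groupdict() if m else None
```

Real index entries are far messier, which is why the project moved from simple expressions toward matching algorithms and a richer database.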
Craig Churchill presents the main trends in microinsurance, gives some examples of innovation in the sector, and highlights the common mistakes different players make when starting a scheme.
Chris Lyal - Taxonomy and the Web - integrating the pieces (ICZN)
More and more calls for information about species
What is this?
What species live in my country / national park?
What species are eating my crops?
What happens to them if I manage the environment?
Nigel J. Robinson - ZooBank and Zoological Record - a partnership for success (ICZN)
Since its origin in 1864, ZR has had a close association with the taxonomic community, particularly with the Zoological Society of London. ZR was founded in 1864 by a group of scientists associated with the British Museum. It continued, supported by the Society, until 1980, when a partner was sought and BIOSIS took over production activities. BIOSIS in turn realised that with limited resources we could not achieve our aims and put our ideas into practice without further partnerships, so in January 2004 BIOSIS (including ZR) was acquired by the Thomson Corporation, and the new ownership is now starting to pay dividends. Over those 150 years or so there have been difficult times, but ZR is still here and still has the same purpose it had in 1864 - to serve the community and disseminate taxonomic, biodiversity and zoological information for the benefit of scientific research.
This presentation discusses ZR and the new free Index to Organism Names service, which demonstrates Thomson's commitment to this initiative. I will also discuss how the partnership between ZR and ICZN might work from the ZR perspective.
Sherborn: Evenhuis - Charles Davies Sherborn and The Indexer’s Club (ICZN)
Charles Davies Sherborn was an indexer. And he followed a long line of indexers. And a longer line of indexers followed him. They/we are all members of “The Indexer’s Club”. A club of obsessed individuals who, for some weird reason, find it necessary to not only facilitate a semblance of order, but to make sometimes incredibly huge amounts of information available to others [sacrificing their social lives and labouring on what spouses and colleagues may consider esoteric projects in order to save others from the same work]. And in doing so, encumbering most of the day and the wee hours of the night with a passion and fervour few other human beings can even begin to understand. This presentation will explore the bits of Sherborn’s life that led to that passion for indexing; and touch upon the impact he has had on bibliographies and researching the dates of publication; upon nomenclature; and upon the indexing of names — and it will attempt to explain why he did this and where we all can go as a result.
Sherborn: Fautin & Alonso-Zarazaga - LANs: Lists of Available Names – a new g... (ICZN)
Article 79 of the ICZN Code, which appeared first in the Fourth Edition, outlines a procedure for adding large numbers of names to the List of Available Names simultaneously, as a Part of the List. This feature has gained importance with the development of ZooBank, because the LAN can be an important adjunct to or component of ZooBank. Article 79 describes a deliberative process, detailing steps for submission and for consideration by the public and Commission, and their chronology: submission must be by “an international body of zoologists,” and the proposed Part must be available for “comments by zoologists” for 12 months, followed by another 12-month period for comments on the proposed Part as revised in light of comments received. However, Article 79 is mute about the contents of the submission. It is clear that adding a Part to the List will prevent long-forgotten names from displacing accepted ones – thus, for taxa on the List under the provisions of Article 79, nomenclatural archeology will not be worthwhile. Beyond that, Commissioners who participated in writing the Fourth Edition are divided about the intent of Article 79: some aver that it is intended to document every available name within the scope of the Part, others that it is to pare the inventory of names within the scope of the Part. The comprehensiveness of the names in the Part is critical because, according to Article 79.4.3, “No unlisted name within the scope (taxonomic field, ranks, and time period covered) of an adopted Part of the List of Available Names in Zoology has any status in zoological nomenclature despite any previous availability” (names may subsequently be added only “in exceptional circumstances,” according to Article 79.6). Under the first interpretation, the Part functions as a strictly nomenclatural archive.
Under the second interpretation, the Part pares away nomina dubia, so Parts of the List resulting from actions under Article 79 are like the Approved Lists of Bacterial Names that took effect on 1 January 1980 – taxonomically recognizable as well as nomenclaturally available. It is critical that a consistent basis for implementing Article 79 be adopted; it is unrealistic to expect unanimity, given the diversity of opinion among those who helped craft Article 79.
Sherborn: Scholz - BHL-Europe: Tools and Services for Legacy Taxonomic Litera... (ICZN)
Literature research is the basis for the scientific work of taxonomists. Large and well-curated natural history libraries are therefore a very important prerequisite for carrying out scientific projects efficiently. Library work, however, has several serious limitations that slow the work down significantly. The natural history library corpus is highly fragmented and scattered. In particular, much of the early published literature is rare or is available in only a very few libraries. A great deal of time and effort is required to find and collect all the scientific works necessary for a specific project.
Today, quick and easy access to digital literature is increasingly important for facilitating scientific work. Over the last few years a large number of library resources for taxonomists have been made available online. Since 2007, the Biodiversity Heritage Library (BHL) project has been digitising the biodiversity literature holdings of numerous libraries in the UK and USA and making them available on the internet.
Since 2009, the eContentplus project Biodiversity Heritage Library for Europe (BHL-Europe) has been developing four different access routes to the biodiversity literature digitised by many European and global partners over recent years. With the Global References Index to Biodiversity (GRIB, http://grib.gbv.de/), BHL-Europe provides, in collaboration with the EDIT project, a union catalogue of the library holdings of many European and US libraries. This will facilitate the search for literature, whether digitised or not. The tool will also facilitate the management of digitisation projects all over the world and collect scan requests from the scientific community. For effective access to already-digitised literature, BHL-Europe is building a multilingual portal for the scientific community. This portal will also have functionality not currently available in the BHL portal. The BHL-Europe portal will, for example, support searches for the common and scientific names of biological organisms, as well as person names, through the implementation of various web services (e.g. Catalogue of Life, VIAF). The backbone of the portal is a preservation and archive system built on a customised storage infrastructure housed at the Natural History Museum in London. We are currently collecting digitised literature from 27 different content providers on our servers, including all the content currently available through the BHL portal (http://www.biodiversitylibrary.org). To serve a broader audience as well, the digitised literature made available by BHL-Europe is also accessible through Europeana, Europe's digital library, archive and museum (http://www.europeana.eu/).
To date, most digitisation of taxonomic literature has led to a more or less simple digital copy of a paper original – the output has effectively been an electronic copy of a traditional library. While this has increased accessibility of publications through internet access, for many scientific papers the means of indexing and locating them is much the same as with traditional libraries. OCR and born-digital papers allow use of web search engines to locate instances of taxon names and other terms, but OCR efficiency in recognising names is still relatively poor, people’s ability to use search engines effectively is mixed, and many papers cannot be directly searched. Instead of building digital analogues of traditional publications, we should consider what properties we require of future taxonomic information access. Ideally the content of each new digital publication should be accessible in the context of all previous published data, and the user able to retrieve nomenclatural, taxonomic and other data / information in the form required without having to scan all of the original paper and extract target content manually. This opens the door to dynamic linking of new content with extant systems – automatic population and updating of taxonomic catalogues, ZooBank and faunal lists, all descriptions of a taxon and its children instantly accessible with a single search, comparison of classifications used in different publications, and so on. The means to do this is currently marking up content into XML, the more atomised the mark-up the greater the possibilities for data retrieval and integration. Mark-up requires XML that accommodates the required content elements and is interoperable with other XML schemas, and there are now several written to do this, particularly TaxPub, taxonX and taXMLit, the last of these being the most atomised. 
Building on earlier systems for mark-up of legacy literature, ViBRANT is developing a new workflow and seeking to increase the automated component of the process. Manual and automatic data and information retrieval is demonstrated by projects such as INOTAXA and Plazi. As we move to creating and using taxonomic products through the power of the internet, we need to ensure the output, while satisfying the requirements of the Code, is fit for purpose in the future.
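To make the atomised mark-up idea concrete, a hypothetical, heavily simplified taxonomic-treatment fragment might look like the following (the element names and namespace are invented for illustration; real schemas such as TaxPub, taxonX and taXMLit are considerably richer):

```xml
<tp:taxon-treatment xmlns:tp="http://example.org/taxpub-sketch">
  <tp:nomenclature>
    <tp:taxon-name>
      <tp:genus>Aus</tp:genus>
      <tp:species>bus</tp:species>
      <tp:authority>Smith, 2011</tp:authority>
    </tp:taxon-name>
    <tp:status>sp. nov.</tp:status>
  </tp:nomenclature>
  <tp:description>Body length 4.2 mm; elytra dark brown.</tp:description>
</tp:taxon-treatment>
```

Once names, statuses and descriptions are tagged at this granularity, the automatic population of catalogues, ZooBank and faunal lists described above becomes a matter of querying elements rather than re-reading papers.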
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps means. We closed with a lovely workshop in which the participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT stylesheets and schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating, explaining, or refactoring code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the knowledge needed to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it into advanced XML development, this presentation covers all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
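As a concrete flavour of the artefacts such prompting targets, here is a minimal Schematron schema (the constraint itself is an invented example) of the kind an assistant might be asked to generate, explain, or refactor:

```xml
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron" queryBinding="xslt2">
  <sch:pattern>
    <sch:rule context="section">
      <!-- Every section must carry a title element. -->
      <sch:assert test="title">A section requires a title element.</sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```

Small, declarative artefacts like this are also a natural place to measure how reliably an AI assistant interprets and produces correct markup.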
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs (Alex Pruden)
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
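The skipping idea can be illustrated with a toy matcher (plain-Python intuition only; Reef's SAFA and its zero-knowledge machinery are far more general):

```python
# Toy illustration of "skipping": a matcher for the pattern ".*error.*"
# only does automaton work on characters that can advance a match and
# counts every other document position as skipped.

def match_with_skips(document: str, literal: str = "error"):
    """Return (matched, steps, skipped) for the pattern '.*literal.*'.

    The naive restart below is correct only for literals, like "error",
    whose proper prefixes do not overlap (no KMP failure links needed).
    """
    state = 0             # number of literal characters matched so far
    steps = skipped = 0
    for ch in document:
        if state == len(literal):     # already matched: skip the rest
            skipped += 1
        elif ch == literal[state]:    # relevant character: advance
            state += 1
            steps += 1
        elif ch == literal[0]:        # could restart a match
            state = 1
            steps += 1
        else:                         # irrelevant character: skip
            state = 0
            skipped += 1
    return state == len(literal), steps, skipped
```

In Reef, the point of skipping is that the prover pays (in proof work) only for the relevant positions, which is what makes 32M-character documents tractable.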
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools -- libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security-analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher overall coverage. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
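The byte-elimination idea can be sketched as follows (a toy illustration, not the actual DIAR algorithm or its AFL integration; the coverage function is a stand-in for real execution-path instrumentation):

```python
# Greedily drop seed bytes whose removal leaves observed coverage
# unchanged, yielding a leaner seed for the fuzzer to mutate.

def coverage(data: bytes) -> frozenset:
    """Pretend target: each structural feature it notices is a 'path'."""
    paths = set()
    if data.startswith(b"<"):
        paths.add("xml-open")
    if b"/>" in data:
        paths.add("self-close")
    if b"=" in data:
        paths.add("attribute")
    return frozenset(paths)

def shrink_seed(seed: bytes) -> bytes:
    """Remove every byte whose deletion does not change coverage."""
    baseline = coverage(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + 1:]
        if coverage(candidate) == baseline:
            seed = candidate      # uninteresting byte: drop it
        else:
            i += 1                # byte matters: keep it
    return seed
```

Every byte dropped here is a byte the fuzzer no longer wastes mutations on, which is the intuition behind starting campaigns from lean seeds.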
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover Test Automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI into a test automation solution, using OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
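The "make adoption easy" point is visible in the compose-style pipeline pattern such libraries expose; a dependency-free sketch of the pattern (the names here are illustrative, not Albumentations' real API):

```python
# A minimal compose-style augmentation pipeline: each transform is applied
# to a sample with its configured probability, in order.
import random

class Compose:
    """Apply each (probability, transform) pair to a sample in order."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, sample):
        for prob, fn in self.transforms:
            if random.random() < prob:
                sample = fn(sample)
        return sample

def horizontal_flip(rows):          # toy "image": a list of pixel rows
    return [row[::-1] for row in rows]

def invert(rows):
    return [[255 - px for px in row] for row in rows]

pipeline = Compose([(1.0, horizontal_flip), (1.0, invert)])
```

Letting users declare a whole augmentation policy in a few lines, then call it like a function, is a large part of what makes adoption frictionless.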
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyper-personalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
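The retrieve-then-rerank step at the heart of this pitch can be sketched as follows (the scoring functions are stand-ins, not Tecton's implementation or a real RAG stack):

```python
# First pass recalls candidates cheaply; second pass reranks them using
# enrichment signals such as freshness and real-time user context.

def retrieve(query_terms, docs, k=3):
    """First pass: cheap keyword-overlap recall over the candidate pool."""
    scored = [(sum(term in d["text"] for term in query_terms), d) for d in docs]
    return [d for score, d in sorted(scored, key=lambda s: -s[0])[:k] if score > 0]

def rerank(candidates, user_context):
    """Second pass: enrich the ranking with per-user context."""
    def score(d):
        bonus = 2 if d.get("topic") in user_context["interests"] else 0
        return d.get("freshness", 0) + bonus
    return sorted(candidates, key=score, reverse=True)

docs = [
    {"text": "intro to vector databases", "topic": "databases", "freshness": 1},
    {"text": "vector search at scale", "topic": "search", "freshness": 0},
    {"text": "pasta recipes", "topic": "food", "freshness": 5},
]
ranked = rerank(retrieve(["vector"], docs), {"interests": {"search"}})
```

The personalization lives entirely in the second pass: the same retrieved candidates reorder per user as context changes.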
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
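The automated policy-check idea above can be illustrated with a toy gate over an SBOM-like document (the structure, CVE identifiers, and thresholds are invented for illustration; real pipelines consume formats such as SPDX or CycloneDX produced by dedicated scanners):

```python
# A toy policy gate: fail the build if any package in the SBOM carries a
# vulnerability at a blocked severity.

BLOCKED_SEVERITIES = {"Critical", "High"}

def policy_check(sbom: dict) -> list:
    """Return the findings that should fail the build."""
    findings = []
    for pkg in sbom.get("packages", []):
        for vuln in pkg.get("vulnerabilities", []):
            if vuln["severity"] in BLOCKED_SEVERITIES:
                findings.append(f'{pkg["name"]}: {vuln["id"]} ({vuln["severity"]})')
    return findings

sbom = {
    "packages": [
        {"name": "openssl",
         "vulnerabilities": [{"id": "CVE-2024-0001", "severity": "High"}]},
        {"name": "zlib",
         "vulnerabilities": [{"id": "CVE-2024-0002", "severity": "Low"}]},
    ]
}
```

A gate like this, run on every container image, is what turns SBOMs and vulnerability reports from paperwork into enforceable policy evidence for an ATO.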