Open source software (OSS) has become critically important worldwide. Yet while some OSS projects follow good security practices, others do not, which can lead to dangerous security vulnerabilities. The CII (Core Infrastructure Initiative) Best Practices badge program was created to address this situation. The program defines "best practice" criteria for security and sustainment, along with a process for awarding OSS projects a badge attesting that they meet those criteria. The goal is to encourage projects to apply best practices and to help users identify the projects that follow them.
This presentation will cover the current state of the badge program. It will outline the key criteria for the different levels (passing, silver, and gold), the projects that have earned badges, the security improvements projects have made to earn a badge, support for multiple languages (French, German, etc.), and some interesting approaches projects have taken to meet the criteria. We will also look at how participation has evolved over the years (currently more than 3,800 participating projects). Finally, the presentation will cover how the program connects to the wider world, including its integration into the OpenSSF (Open Source Security Foundation) and the potential impact of the US presidential executive order on cybersecurity.
The past ten years have seen the emergence of a new role, distinct from those of user and developer: the infrastructure provider, the person who makes programmable infrastructure available to the other roles. Open infrastructure means offering this new role open source solutions for deploying infrastructure at the right scale. With the success of OpenStack and Kubernetes, open infrastructure is booming. Digital sovereignty is one of the concerns that will drive future adoption of open infrastructure, particularly in Europe. What do the next ten years hold for open infrastructure?
My web application in 20 minutes with Telosys - Laurent Guérin
The document introduces Telosys, an open source code generation tool that allows developers to quickly generate code from models using templates. It aims to improve productivity, standardization, quality and simplicity over manual coding. The document demonstrates how to define a model and templates, and then generate Python web application code including entities, services, controllers and views using Telosys.
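The model-and-template approach described above can be illustrated with a minimal sketch. This is plain Python, not Telosys's actual model files or template language; the entity model and the template here are hypothetical, and only mirror the general idea of combining a model with templates to produce source code.

```python
# Minimal illustration of model-driven code generation: a model
# (entities and typed attributes) is combined with a template to
# emit source code. Telosys uses dedicated model files and a real
# template language; this sketch only demonstrates the principle.

MODEL = {
    "Book": [("id", "int"), ("title", "str"), ("price", "float")],
}

ENTITY_TEMPLATE = '''class {name}:
    def __init__(self, {args}):
{assigns}
'''

def generate(model):
    """Render one class definition per entity in the model."""
    out = []
    for name, attrs in model.items():
        args = ", ".join(f"{a}: {t}" for a, t in attrs)
        assigns = "\n".join(f"        self.{a} = {a}" for a, _ in attrs)
        out.append(ENTITY_TEMPLATE.format(name=name, args=args, assigns=assigns))
    return "\n".join(out)

print(generate(MODEL))
```

Because the templates are separate from the model, the same model could drive generation of entities, services, controllers, and views, which is the productivity argument the talk makes.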
This session will demonstrate how to use the Zowe open source framework to extend modern devops tooling and practices to the mainframe and to enhance the mainframe developer experience. A follow-up to the overview session, the hosts will drill into the Zowe architecture while demoing key capabilities including the command line interface (CLI) and API Mediation Layer.
Organized by the Linux Foundation’s Open Mainframe Project, Zowe opens the mainframe to the next generation of talent. Join this interactive session to learn how to “un-silo” the mainframe to accelerate software delivery and drive true cross-platform applications.
This presentation was delivered as part of the Faculty training program at Kristu Jayanthi College, Bangalore. The intent was to help students build competency and contribute to open source projects, which will eventually help them build professional careers in domains connected to open source.
This event was organized by the SODA Foundation, and many fabulous speakers delivered the series. Thank you, SODA!
Open Source Investments in Mainframe Through the Next Generation - Showcasing... (Open Mainframe Project)
In its 3rd year, the Open Mainframe Project continues to invest in the open source ecosystem on the mainframe through its summer internship program. This year's class focused on improving mainframe open source packaging and support for modern technologies such as Cloud Foundry and Kubernetes.
In this session, interns will present their work and experience in working in the internship program.
Open Source Licensing: Types, Strategies and Compliance (All Things Open)
Presented by: Jeff Luszcz, ZebraCat
Presented at All Things Open 2020
Abstract: Open Source powers the world, but you need to do more than use it.
In this talk we will provide background on the most common types of open source licenses, business models, security issues and the processes required to help you remain secure and in compliance. We will discuss best practices, scanning tools, remediation, customer and partner expectations around OSS compliance and how to manage OSS during events such as a product release or M&A.
Cross-platform Mobile Development on Open Source (All Things Open)
This document provides an overview of cross-platform mobile development using open source tools. It discusses hybrid mobile frameworks like Apache Cordova that allow building mobile apps with web technologies that are deployed to native app stores. While early hybrid apps had performance issues, newer frameworks discussed like React Native and NativeScript claim to generate truly native apps with high code reuse across platforms using JavaScript. The document also covers adjacent native frameworks like Xamarin that compile to native apps from C# instead of web technologies. Overall it introduces a variety of open source options for cross-platform mobile development.
Open Mainframe Project's Zowe, the first open source software framework for the mainframe, has announced its first active Long Term Support (LTS) release and an updated Zowe Conformance Program. This webinar will explain the significance of LTS and its impact on the Zowe Conformance Program, which will gain new features and enhancements. Join this webinar to learn more about the Zowe LTS release, the Zowe Conformance Program, and how to get involved in one of the most active open source communities!
Speakers include:
- Bruce Armstrong, Member of the Zowe Leadership Committee and IBM Z Offering Manager
- Peter Fandel, Member of the Zowe Leadership Committee and Senior Director, Product Management for Rocket Software
- Rose Sakach, Zowe Onboarding Squad Scrum Master and Global Product Manager, Mainframe Division for Broadcom
The document summarizes an agenda for an Open Mainframe Project event. It includes introductions of several mainframe-centric open source projects hosted by the Open Mainframe Project: Ambitus, Feilong, Polycephaly, Zorow, and Zowe. It provides overviews of the missions and benefits of each project. It also discusses the Zowe Conformance Program and how to get involved in the Open Mainframe Project community through various activities and events.
Whether you are a Zowe User, Contributor, Extender, or simply interested in what's happening with Zowe, please join us for the launch of the Zowe Quarterly Update Webinar. This is the first in a series of webinars we plan to host each quarter. The webinar will include:
A focus topic / speaker
A brief Zowe update
Upcoming Community Events Overview
Interactive Polls
Join us on this webinar to learn how we are extending the Zowe ZSS (z/OS back-end) to facilitate building in-depth (cross-memory, privileged, system-level) mainframe products with little-to-no assembler code required.
The document discusses the challenges of implementing effective network segmentation across modern distributed systems. It outlines several common mechanisms used for segmentation, such as VPC networks, security groups, Docker networking, and eBPF/Calico policies. However, it notes that individually these approaches face issues with scalability, coordination, and potential for misconfiguration. The document advocates for a hierarchical approach to segmentation that enforces consistent policies across layers from IAM roles to security groups to individual networks or segments. It raises open questions around coordinating policy specification and management across the different available mechanisms.
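The hierarchical approach the document advocates can be sketched as a consistency check: a flow counts as allowed only if every layer permits it, and pairs permitted at some layers but not others are flagged as likely misconfigurations. The layer names and rules below are hypothetical, illustrative Python, not any specific tool's policy model.

```python
# Sketch of hierarchical segmentation: a flow is allowed only if
# every layer (VPC, security group, container network) permits it.
# Rules are simplified allow-lists of (source, destination) pairs.

LAYERS = {
    "vpc":            {("web", "app"), ("app", "db")},
    "security_group": {("web", "app"), ("app", "db"), ("web", "db")},
    "container_net":  {("web", "app"), ("app", "db")},
}

def flow_allowed(src, dst):
    """A flow must be permitted at every layer of the hierarchy."""
    return all((src, dst) in rules for rules in LAYERS.values())

def inconsistencies():
    """Pairs allowed at some layers but not all: likely misconfigurations."""
    union = set().union(*LAYERS.values())
    return {pair for pair in union if not flow_allowed(*pair)}
```

Here `("web", "db")` is opened in the security group but blocked elsewhere, so `inconsistencies()` surfaces it, which is exactly the kind of cross-layer coordination problem the document raises.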
Open Source on the Mainframe Mini-Summit 2019 - How Open Source is Modernizin... (Open Mainframe Project)
The open source movement has rapidly become the way code is being developed for today’s smart and agile businesses. This session will cover how an “open mainframe” is the perfect solution for deploying open source on an enterprise computing platform. You will learn how the open source community has gathered around the mainframe platform and how open source projects such as Zowe and Feilong are the starting point for open development. The session will also cover how the mainframe platform is a natural technology for Linux deployments, and how the mainframe community operates within the wider construct of the Linux Foundation.
SUSE and IBM have partnered for 20 years to bring Linux to mainframes. Key events include the first release of SUSE Linux on Z in 2000 and the certification of SAP on SLES for Z in 2002. Recent developments include KVM support in 2017, crypto enhancements in 2019, and the release of SLES 15 for Z and LinuxONE, which features security, Kubernetes, and cloud technologies. SUSE aims to run containers on Z using Kubernetes and to optimize cloud platforms like Cloud Foundry for mainframes.
The document provides an overview of the Internet of Things (IoT). It discusses early IoT projects from 15 years ago that allowed remote control of devices. It outlines the hardware, networking, protocols, and software enablers that have made IoT possible. Examples of IoT products and devices are provided. Challenges facing IoT like sensing environments, connectivity, power, security, and maintenance are also summarized.
The .NET ecosystem has radically transformed over the past 10 years; in the distant past, Microsoft actively discouraged and categorically dismissed the possibility and viability of OSS. Now, everything is open source and Microsoft is one of the single biggest contributors to open source globally. The same trend is strongly reflected in the .NET community: large companies, including banks, insurers, airlines, manufacturers, and health care giants, all feel increasingly comfortable using OSS products at the core of applications that generate billions of dollars a year.
In this talk, we're going to cover the scope of the sustainability crisis, how it may affect you, and how to help prevent it both as an OSS user or as a contributor.
Feilong is a Python toolkit for managing cloud resources in a LinuxONE environment. It allows developers to create plugins that interface with the REST API to perform tasks like managing guest images, networking, and disk volumes. Feilong is installed in a "Bring Your Own Linux" virtual machine and governed by the Open Mainframe Project. It was originally created by IBM in 2017 to function as a z/VM Cloud Connector and interface with the LinuxONE hypervisor to enable management of virtual machines and resources.
This document discusses convergence across the cloud native ecosystem and connections between communities. It highlights how connected communities can foster cross-community collaboration through various means like commons sites, briefings, code contributions, and mailing lists. Examples are given of relationships between individuals, projects, and corporations in the Kubernetes and OpenShift ecosystems. Key players like Red Hat, IBM, Uber, and Amadeus are discussed in terms of their involvement in projects like Kubernetes, OpenShift, Jaeger, OpenStack, and more. The importance of inter-corporate relationships and upstream engagement is also touched on.
The document discusses the growth and development of the Node.js community and project. It notes that the number of contributors has grown from 14 to 85 in a year and a half. It also outlines improvements made to stability, standards support, language features, debugging tools, and the goal of a new installer. Overall the document conveys that Node.js has expanded its community involvement while focusing on increasing stability, performance, and standardization.
QCon SF 2017 - Microservices: Service-Oriented Development (Ambassador Labs)
Conventional wisdom is that microservices is an architecture that is the spiritual successor to service-oriented architecture. While true, this myopic view of microservices ignores some of the profound workflow shifts in today’s microservices organizations.
The reality is that microservices is an architecture _and_ workflow. In this talk, we’ll introduce the workflow of service-oriented development. Rafael will talk about how the real goal of microservices is to break up a monolithic development workflow. We’ll show you how, by breaking up your workflow, you can build software that lets you move fast and make things.
Is Enterprise Java Still Relevant (JavaOne 2015 session) - Ian Robinson
Soon after Java burst into the world in the 90s it started to gatecrash the parties of its enterprise computing seniors, whose initial amused response was -- You're Not On The List, You're Not Coming In. But EJBs turned heads in the 20th Century and when the Java Enterprise platform emerged, it started getting more invites until it was the party. Now Java EE is grown up with its own kids - EE7 is already two years old. How is it and the platform doing? The party is now in the cloud and the guest list includes many different language technologies and fast-moving open-source innovations. Is Enterprise Java still relevant here? And if it is, what does it need to keep doing or what does it need to change to stay on the VIP list?
Microservices and containers networking: Contiv, an industry leading open sou... (Codemotion)
Contiv provides a higher level of networking abstraction for microservices: it provides built-in service discovery and service routing for scale out services, working with schedulers like Docker Swarm, Kubernetes, Mesos and Nomad. We will see some code examples, basic use cases and an easy tutorial on the web.
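The built-in service discovery and routing described above can be sketched as a minimal registry: instances of a scale-out service register under a name, and clients resolve the name and are load-balanced round-robin. This is illustrative Python only; Contiv implements discovery inside its networking layer, not as an application-level dictionary, and the service names and endpoints below are hypothetical.

```python
# Sketch of service discovery for scale-out services: instances
# register under a service name; clients resolve the name and are
# spread across instances round-robin.

class Registry:
    def __init__(self):
        self._services = {}   # name -> list of endpoints
        self._counts = {}     # name -> round-robin counter

    def register(self, name, endpoint):
        """Add one instance of a service."""
        self._services.setdefault(name, []).append(endpoint)
        self._counts.setdefault(name, 0)

    def resolve(self, name):
        """Return the next endpoint for a service, round-robin."""
        eps = self._services[name]
        i = self._counts[name] % len(eps)
        self._counts[name] += 1
        return eps[i]

registry = Registry()
registry.register("api", "10.0.0.1:8080")
registry.register("api", "10.0.0.2:8080")
```

Schedulers like Swarm, Kubernetes, Mesos, and Nomad would call `register` as they place containers, which is the integration point the abstract refers to.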
It’s 2021. Why are we -still- rebooting for patches? A look at Live Patching (All Things Open)
Presented by: Igor Seletskiy
Presented at the All Things Open 2021
Raleigh, NC, USA
Raleigh Convention Center
Abstract: IT Teams know the drill. New security bulletins, new issues, new patches to deploy. Schedule another maintenance operation and prepare for system downtime.
There is a better way to do things. Live patching has been around in the Linux kernel for some time now, but adoption has not been ideal so far, whether because of a lack of trust in the technology, a simple lack of awareness, or because sysadmins just enjoy interrupting their workloads and users.
Live patching consists of two aspects. First, there has to be a mechanism for function redirection in the kernel. As in many things, the kernel actually provides three different subsets of tools offering this functionality: kprobes, fprobes and livepatching. Second, live patching relies on a set of tools to generate the actual patches to deploy, replacing the old code with the new. This is arguably the most involved part: you need to fit your new code into the proper space, you can’t overwrite other unrelated code, and you need to maintain compatibility with other functions. If you change your parameter list, for example, it’s game over: something will break in the worst possible way.
In this talk we’ll go over issues like Consistency model, patch generation, deployment mechanisms and identify situations that are ideal candidates for live patching instead of traditional patching operations.
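The function-redirection idea at the heart of live patching can be illustrated at user level with a Python analogy. This is only a conceptual sketch: kernel livepatching redirects compiled functions via in-kernel mechanisms, not by swapping Python objects, and the buggy/fixed functions here are hypothetical.

```python
# User-level analogy of live patching: redirect calls to a buggy
# function to a fixed replacement without restarting the program.
# Note the replacement keeps the same parameter list -- changing
# the signature is exactly what the talk warns breaks everything.

def parse_header(data):          # "running" code with a bug:
    return data.split(";")       # wrong delimiter

def parse_header_patched(data):  # the "live patch", same signature
    return data.split(",")

# Callers resolve the function through a dispatch table, so the
# redirection takes effect for all subsequent calls immediately.
DISPATCH = {"parse_header": parse_header}

def call(name, *args):
    return DISPATCH[name](*args)

DISPATCH["parse_header"] = parse_header_patched  # apply the patch
```

After the last line, every call through `call("parse_header", ...)` runs the fixed code with no restart, which is the effect live patching achieves for kernel functions.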
Robust collaboration services with OSGi - Satya Maheshwari (mfrancis)
The document discusses how Adobe Connect, Adobe's web conferencing platform, uses the OSGi framework to create a modular architecture. This allows individual components like audio conferencing to be updated independently without disrupting the entire application. It also avoids single points of failure by distributing components across multiple OSGi bundles. The speaker describes how audio conferencing is implemented using different OSGi bundles for the telephony manager, adaptors for each conferencing service, and asynchronous communication between bundles using event handlers.
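The asynchronous communication between bundles described above can be sketched as a minimal publish/subscribe event bus. This is plain Python, not the actual OSGi EventAdmin API, and the topic name and payload below are hypothetical.

```python
# Minimal publish/subscribe bus, mirroring how OSGi bundles can
# communicate via event handlers instead of direct references, so
# one component can be replaced without disrupting the others.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler (a "bundle") for a topic."""
        self._handlers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver an event to every handler subscribed to the topic."""
        for handler in self._handlers[topic]:
            handler(event)

bus = EventBus()
joined = []
bus.subscribe("conference/joined", lambda e: joined.append(e["user"]))
bus.publish("conference/joined", {"user": "alice"})
```

Because publishers never hold a reference to subscribers, an audio-conferencing adaptor could be unregistered and replaced while the rest of the application keeps running, which is the decoupling benefit the talk attributes to OSGi.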
The document discusses various open source tools that can be used to build production-ready Kubernetes clusters, including tools for observability, automation, continuous integration, ingress, security, backup/restore, and policy enforcement. It analyzes the advantages and disadvantages of popular options for logs/metrics collection, GitOps, service meshes, ingress controllers, identity management, and backups. Key criteria for tool selection are that they are open source, tested/proven in projects, and have an active community.
1) The document discusses how DevOps practices like continuous integration, delivery, and deployment can help organizations innovate faster by getting code changes to production environments more quickly.
2) It provides examples of how some banks are transforming their development processes using Red Hat OpenShift to deploy microservices in seconds rather than months.
3) The document outlines the benefits of a continuous delivery pipeline that leverages tools like Jenkins to automatically build, test, and deploy application images to non-production and production environments with minimal manual approvals required.
This document provides an introduction to the Core Infrastructure Initiative (CII) Best Practices Badge. It discusses the motivation for developing the badge, which was to encourage open source projects to follow best practices that increase security and quality. The badge criteria focus on areas like change management, reporting, quality, security, and analysis. Projects can self-certify that they meet the criteria to receive a badge at no cost. Over 3,200 projects are now participating in the program. The badge levels of passing, silver, and gold require meeting additional criteria. The CII aims to identify best practices and encourage their adoption to improve trust in critical open source infrastructure.
Introduction to the CII Badge Programme, OW2con'16, Paris (OW2)
The document describes the Core Infrastructure Initiative (CII) Best Practices Badge, which is a project that aims to encourage open source software projects to follow good security practices. It does this by identifying a set of best practices and criteria that projects can self-certify against using a web application. Projects that meet the criteria receive a badge. The document provides background on CII and describes the criteria categories and examples. It also discusses the current state of badge adoption, sample impacts it has had, and future plans. The goal is to incentivize and recognize open source projects that follow secure development practices.
The document summarizes an agenda for an Open Mainframe Project event. It includes introductions of several mainframe-centric open source projects hosted by the Open Mainframe Project: Ambitus, Feilong, Polycephaly, Zorow, and Zowe. It provides overviews of the missions and benefits of each project. It also discusses the Zowe Conformance Program and how to get involved in the Open Mainframe Project community through various activities and events.
Whether you are a Zowe User, Contribor, Extender or simply interested in what's happening with Zowe - please join us for the launch of the Zowe Quarterly Update Webinar. This is the first in the series of webinars we plan to host each quarter. The webinar will include:
A focus topic / speaker
A brief Zowe update
Upcoming Community Events Overview
Interactive Polls
Join us on this webinar to learn how we are extending the Zowe ZSS (z/OS back-end) to facilitate building in-depth (cross-memory, privileged, system-level) mainframe products with little-to-no assembler code required.
The document discusses the challenges of implementing effective network segmentation across modern distributed systems. It outlines several common mechanisms used for segmentation, such as VPC networks, security groups, Docker networking, and eBPF/Calico policies. However, it notes that individually these approaches face issues with scalability, coordination, and potential for misconfiguration. The document advocates for a hierarchical approach to segmentation that enforces consistent policies across layers from IAM roles to security groups to individual networks or segments. It raises open questions around coordinating policy specification and management across the different available mechanisms.
Open Source on the Mainframe Mini-Summit 2019 - How Open Source is Modernizin...Open Mainframe Project
The open source movement has rapidly become the way code is being developed for today’s smart and agile businesses. This session will cover how an “open mainframe” is the perfect solution for deploying open source on an enterprise computing platform. You will learn how the open source community has gathered around the mainframe platform and how open source projects such as Zowe and Feilong are the starting point for open development. The session will also cover how the mainframe platform is a natural technology for Linux deployments, and how the mainframe community operates within the wider construct of the Linux Foundation.
SUSE and IBM have partnered for 20 years to bring Linux to mainframes. Key events include the first release of SUSE Linux on Z in 2000 and certifying SAP on SLES for Z in 2002. Recent developments include KVM support in 2017, crypto enhancements in 2019, and the release of SLES for Z/L1 15 which features security, Kubernetes, and cloud technologies. SUSE aims to run containers on Z using Kubernetes and optimize cloud platforms like Cloud Foundry for mainframes.
The document provides an overview of the Internet of Things (IoT). It discusses early IoT projects from 15 years ago that allowed remote control of devices. It outlines the hardware, networking, protocols, and software enablers that have made IoT possible. Examples of IoT products and devices are provided. Challenges facing IoT like sensing environments, connectivity, power, security, and maintenance are also summarized.
The .NET ecosystem has radically transformed over the past 10 years; in the distant past, Microsoft actively discouraged and dismissed the possibility and viability of OSS categorically. Now, everything is open source and Microsoft is one of the single biggest contributors of open source globally. That same trend is strongly reflected in the .NET community - large companies include banks, insurers, airlines, manufacturers, and health care giants all feel increasingly comfortable using OSS products in the core of applications that generate billions of dollars a year in capital.
In this talk, we're going to cover the scope of the sustainability crisis, how it may affect you, and how to help prevent it both as an OSS user or as a contributor.
Feilong is a Python toolkit for managing cloud resources in a LinuxONE environment. It allows developers to create plugins that interface with the REST API to perform tasks like managing guest images, networking, and disk volumes. Feilong is installed in a "Bring Your Own Linux" virtual machine and governed by the Open Mainframe Project. It was originally created by IBM in 2017 to function as a z/VM Cloud Connector and interface with the LinuxONE hypervisor to enable management of virtual machines and resources.
This document discusses convergence across the cloud native ecosystem and connections between communities. It highlights how connected communities can foster cross-community collaboration through various means like commons sites, briefings, code contributions, and mailing lists. Examples are given of relationships between individuals, projects, and corporations in the Kubernetes and OpenShift ecosystems. Key players like Red Hat, IBM, Uber, and Amadeus are discussed in terms of their involvement in projects like Kubernetes, OpenShift, Jaeger, OpenStack, and more. The importance of inter-corporate relationships and upstream engagement is also touched on.
The document discusses the growth and development of the Node.js community and project. It notes that the number of contributors has grown from 14 to 85 in a year and a half. It also outlines improvements made to stability, standards support, language features, debugging tools, and the goal of a new installer. Overall the document conveys that Node.js has expanded its community involvement while focusing on increasing stability, performance, and standardization.
QCon SF 2017 - Microservices: Service-Oriented DevelopmentAmbassador Labs
Conventional wisdom is that microservices is an architecture that is the spiritual successor to service-oriented architecture. While true, this myopic view of microservices ignores some of the profound workflow shifts in today’s microservices organizations.
The reality is that microservices is an architecture _and_ workflow. In this talk, we’ll introduce the workflow of service-oriented development. Rafael will talk about how the real goal of microservices is to break up a monolithic development workflow. We’ll show you how, by breaking up your workflow, you can build software that lets you move fast and make things.
Is Enterprise Java Still Relevant (JavaOne 2015 session)Ian Robinson
Soon after Java burst into the world in the 90s it started to gatecrash the parties of its enterprise computing seniors, whose initial amused response was -- You're Not On The List, You're Not Coming In. But EJBs turned heads in the 20th Century and when the Java Enterprise platform emerged, it started getting more invites until it was the party. Now Java EE is grown up with its own kids - EE7 is already two years old. How is it and the platform doing? The party is now in the cloud and the guest list includes many different language technologies and fast-moving open-source innovations. Is Enterprise Java still relevant here? And if it is, what does it need to keep doing or what does it need to change to stay on the VIP list?
Microservices and containers networking: Contiv, an industry leading open sou...Codemotion
Contiv provides a higher level of networking abstraction for microservices: it provides built-in service discovery and service routing for scale out services, working with schedulers like Docker Swarm, Kubernetes, Mesos and Nomad. We will see some code examples, basic use cases and an easy tutorial on the web.
It’s 2021. Why are we -still- rebooting for patches? A look at Live Patching.All Things Open
Presented by: Igor Seletskiy
Presented at the All Things Open 2021
Raleigh, NC, USA
Raleigh Convention Center
Abstract: IT Teams know the drill. New security bulletins, new issues, new patches to deploy. Schedule another maintenance operation and prepare for system downtime.
There is a better way to do things. Live patching has been around in the Linux Kernel for some time now, but adoption has not been ideal so far - either because of a lack of trust in the technology or just lack of awareness - or sysadmins just enjoy interrupting their workloads or users.
Live patching consists of two aspects. First, there has to be a mechanism for function redirection in the kernel. As in many things, the kernel actually provides three different subsets of tools with this functionality: kprobes, ftrace and livepatch. Second, live patching relies on a set of tools to generate the actual patches to deploy, replacing the old code with the new one. This is arguably the most involved part: you need to fit your new code in the proper space, you can’t overwrite other, unrelated code, and you need to maintain compatibility with other functions. If you change your parameter list, for example, it’s game over: something will break in the worst possible way.
In this talk we’ll go over topics like the consistency model, patch generation and deployment mechanisms, and identify situations that are ideal candidates for live patching instead of traditional patching operations.
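The function-redirection idea at the heart of live patching can be illustrated outside the kernel. The following Python sketch is an analogy only (class and function names are invented; real kernel livepatches are written in C and use ftrace trampolines): it shows callers being redirected at runtime and why the replacement must keep the original parameter list.

```python
# Illustration only: live patching redirects callers of a vulnerable
# function to a fixed replacement at runtime, without a restart.
# All names below are invented for this sketch.

class TLSService:
    def check_cert(self, cert: str) -> bool:
        # "Vulnerable" original: accepts any certificate.
        return True

def check_cert_patched(self, cert: str) -> bool:
    # Same parameter list as the original; changing it would break
    # callers, exactly the constraint described in the abstract.
    return cert == "trusted"

service = TLSService()
assert service.check_cert("evil")        # buggy behavior before the patch

# "Live patch": rebind the function on the running class.
TLSService.check_cert = check_cert_patched

assert not service.check_cert("evil")    # fixed, same running object
assert service.check_cert("trusted")
```

In the kernel, the equivalent rebinding happens per task under a consistency model, which is one of the topics the talk covers.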
Robust collaboration services with OSGi - Satya Maheshwari, mfrancis
The document discusses how Adobe Connect, Adobe's web conferencing platform, uses the OSGi framework to create a modular architecture. This allows individual components like audio conferencing to be updated independently without disrupting the entire application. It also avoids single points of failure by distributing components across multiple OSGi bundles. The speaker describes how audio conferencing is implemented using different OSGi bundles for the telephony manager, adaptors for each conferencing service, and asynchronous communication between bundles using event handlers.
The document discusses various open source tools that can be used to build production-ready Kubernetes clusters, including tools for observability, automation, continuous integration, ingress, security, backup/restore, and policy enforcement. It analyzes the advantages and disadvantages of popular options for logs/metrics collection, GitOps, service meshes, ingress controllers, identity management, and backups. Key criteria for tool selection are that they are open source, tested/proven in projects, and have an active community.
1) The document discusses how DevOps practices like continuous integration, delivery, and deployment can help organizations innovate faster by getting code changes to production environments more quickly.
2) It provides examples of how some banks are transforming their development processes using Red Hat OpenShift to deploy microservices in seconds rather than months.
3) The document outlines the benefits of a continuous delivery pipeline that leverages tools like Jenkins to automatically build, test, and deploy application images to non-production and production environments with minimal manual approvals required.
This document provides an introduction to the Core Infrastructure Initiative (CII) Best Practices Badge. It discusses the motivation for developing the badge, which was to encourage open source projects to follow best practices that increase security and quality. The badge criteria focus on areas like change management, reporting, quality, security, and analysis. Projects can self-certify that they meet the criteria to receive a badge at no cost. Over 3,200 projects are now participating in the program. The badge levels of passing, silver, and gold require meeting additional criteria. The CII aims to identify best practices and encourage their adoption to improve trust in critical open source infrastructure.
Introduction to the CII Badge Program, OW2con'16, Paris. OW2
The document describes the Core Infrastructure Initiative (CII) Best Practices Badge, which is a project that aims to encourage open source software projects to follow good security practices. It does this by identifying a set of best practices and criteria that projects can self-certify against using a web application. Projects that meet the criteria receive a badge. The document provides background on CII and describes the criteria categories and examples. It also discusses the current state of badge adoption, sample impacts it has had, and future plans. The goal is to incentivize and recognize open source projects that follow secure development practices.
DevOps is a software development method that stresses communication and integration between developers and IT operations. It aims to allow for more frequent deployment of code changes through automation of the process from development to production. Key aspects of DevOps include continuous integration, delivery, and monitoring to achieve rapid release cycles and get feedback to improve the process.
DevOps (development & operations) is an enterprise software development term used to describe an agile relationship between development and IT operations. V Cube is one of the best institutes for DevOps training in Hyderabad; we offer comprehensive and in-depth training in DevOps.
DevOps is an IT cultural revolution sweeping through today’s organizations that want to develop, design, test, and deploy software more quickly and effectively. DevOps training in Hyderabad will enable you to master key DevOps principles, tools, and technologies such as automated testing, Infrastructure as Code, Continuous Integration/Delivery, and more.
Software development (Dev) and IT operations (Ops) are combined in DevOps. Its goal is to shorten the systems development life cycle and provide continuous delivery of high-quality software. DevOps complements Agile software development; in fact, several aspects of DevOps came from the Agile methodology.
Academics and practitioners have not developed a universal definition for the term “DevOps” other than it being a cross-functional combination (and a portmanteau) of the terms and concepts for “development” and “operations.” DevOps is typically defined by three key principles: shared ownership, workflow automation, and rapid feedback.
DevOps is defined as “a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality,” according to Len Bass, Ingo Weber, and Liming Zhu, three computer science researchers from the CSIRO and the Software Engineering Institute. The term is, however, used in a variety of contexts. DevOps is a combination of specific practices, culture change, and tools at its most successful.
Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function.
In some DevOps models, quality assurance and security teams may also become more tightly integrated with development and operations and throughout the application lifecycle. When security is the focus of everyone on a DevOps team, this is sometimes referred to as DevSecOps.
These teams use practices to automate processes that historically have been manual and slow. They use a technology stack and tooling which help them operate and evolve applications quickly and reliably. These tools also help engineers independently accomplish tasks (for example, deploying code or provisioning infrastructure) that normally would have required help from other teams, and this further increases a team’s velocity.
One of the challenges faced by many web development projects is the integration of source code for multiple releases during parallel development. Building and testing multiple versions of the source code can eat into quality time and limit the efficiency of the development/QA team. The case study focuses on reducing the extensive effort consumed by the build and deployment process across multiple branches in the source repository, and aims to identify source code integration issues at the earliest stage. This can be further enhanced to limit manual intervention by integrating the build system with a test automation tool.
The above can be achieved by using CI tools (like Hudson, Bamboo, TeamCity, CruiseControl, etc.) for continuous build preparation and their integration with any test automation suite. The case study describes the use of the Hudson CI tool for continuous integration, using Ant for build preparation and then invoking an automation test suite developed with Selenium. It also discusses the limitations and challenges of using such an integration system for testing a web-based application deployed on an Apache Tomcat server, and details additional plugins available to enhance this integration of multiple systems and what can be achieved with it.
A general introduction to the CNCF for beginners, given at the OpenStack meetups in Pune and Bangalore in February 2018. Broadly covers the activities and structure of the Cloud Native Computing Foundation.
This document provides an introduction to key concepts in DevOps, including the cultural and technological aspects. It discusses why the traditional development and operations models were problematic, and how DevOps aims to address this by promoting collaboration and automation. The document outlines typical DevOps implementation plans and some of the common technologies used, such as virtualization, continuous integration/delivery pipelines, infrastructure as code. It also provides recommendations on paths forward for developers, architects and managers in adopting DevOps practices.
Forge.mil is a collaborative software development platform that aims to overcome siloed development, reduce duplication of effort, and enable cross-program sharing of software and services. It provides application lifecycle management services and tools for collaborative development within a shared, multi-tenant environment for Department of Defense programs and partners. Forge.mil has grown to support over 2700 software releases from various DoD projects across different services since its initial launch in 2009.
Fundamentals of Using Open Source Code to Build Products, Brian Warner
(1) Using open source code can help companies save time and money by leveraging existing "heavy lifter" components rather than reinventing them. (2) Companies must balance using existing open source components with contributing back upstream to gain benefits like access to ongoing improvements and meeting license obligations. (3) Complying with open source licenses is important for companies distributing code and involves understanding obligations like including copyright and ensuring access to source code.
The Right Way to Become an Expert iOS Developer - Gilang Ramadhan, DicodingEvent
To win in a competitive market and cope with the high cost of user acquisition, you need a high-performance iOS application that is ready to compete and strong on user retention.
An expert iOS developer is someone who can deliver such a polished app. Few bugs, a high level of security and a small application size are factors they must weigh for the user's comfort.
On the code side, an expert iOS developer must ensure the application is scalable, i.e. it keeps working well as the product changes with business needs. The app must also be robust, built on a strong code foundation.
Otherwise, the company will be burdened with high development costs, because fixes become difficult due to code smells, i.e. messy or badly structured code.
In this IDCamp x Dicoding LIVE session we will discuss what an expert iOS developer needs to understand: best practices for applying the latest iOS development technologies that industry needs, aligned with business requirements.
If your dream is to become an expert iOS developer, you need these insights so your code becomes more solid and easier to evolve with business needs.
Points to be covered include:
- What best practices and skills must you have to become an expert iOS developer, and why do they matter?
- How can Dicoding's "Menjadi iOS Developer Expert" class help you become an expert iOS developer?
Tony Bibbs presented on frameworks for PHP development. He discussed when frameworks should and should not be used, common risks of frameworks, and typical framework components like MVC, ORM, templates. His key recommendation was to only change one component of a framework at a time through incremental improvements.
Security is tough, and even tougher in complex environments with lots of dependencies and a monolithic architecture. With the emergence of the microservice architecture, security has become a bit easier; however, it introduces its own set of security challenges. This talk will showcase how we can leverage DevSecOps techniques to secure APIs/microservices using free and open source software. We will also discuss how emerging technologies like Docker, Kubernetes, Clair, Ansible, Consul, Vault, etc., can be used to scale and strengthen the security program for free.
More details here - https://www.practical-devsecops.com/
This document provides an overview of DevOps delivery pipelines for beginners. It defines key concepts like source code repositories, build artifacts, environments, and the roles of continuous integration and continuous delivery. The core DevOps principle is an automated software delivery pipeline from code to production. This involves separate build and deploy stages. Common build steps include fetching code, testing, packaging artifacts, and publishing to repositories. Deployment typically includes retrieving artifacts, configuring environments, and validating deployments. Setting up roles, notifications, versioned scripts, and avoiding complex triggers are best practices for enjoying an automated DevOps pipeline.
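The build/deploy split described above can be sketched in a few lines of Python. The step names and the in-memory "artifact repository" are invented for illustration and do not reflect any particular CI product's API:

```python
# Minimal sketch of a two-stage pipeline: the build stage publishes a
# versioned artifact; the deploy stage retrieves that artifact instead
# of rebuilding it per environment.

ARTIFACT_REPO: dict = {}   # stand-in for a real artifact repository

def build(commit: str) -> dict:
    """Build stage: fetch code, run tests, package, publish an artifact."""
    steps = [f"fetch {commit}", "test", "package"]
    artifact = {"commit": commit, "version": f"1.0+{commit[:7]}", "steps": steps}
    ARTIFACT_REPO[artifact["version"]] = artifact   # publish
    return artifact

def deploy(version: str, environment: str) -> str:
    """Deploy stage: retrieve the artifact, configure the environment, validate."""
    if environment not in ("staging", "production"):
        raise ValueError(f"unknown environment: {environment}")
    artifact = ARTIFACT_REPO[version]   # same artifact for every environment
    return f"deployed {artifact['version']} to {environment}"

art = build("4f2a9c1d")
print(deploy(art["version"], "staging"))
print(deploy(art["version"], "production"))   # promote the artifact, don't rebuild
```

The design point is the one the document makes: the artifact is built once and then promoted through environments by version, never rebuilt per environment.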
The document discusses how the CIO can help deliver value by embracing new technologies and processes related to agile development, mobile, cloud, big data, and security. It provides examples of how IT is changing to focus on systems of engagement that are personalized, social, and analytics-driven. The document advocates involving information security early in the development process, through representatives embedded in development teams, and establishing security budgets at the start of projects to help improve organizational processes and security.
Micro Focus Software Delivery and Testing: Jan De Coster's presentation on the journey to DevOps at the recent Micro Focus #DevDay Copenhagen.
Micro Focus enables enterprise software organizations to build innovative software and accelerate application delivery to meet the needs of the business. Whatever the challenges and infrastructures, our core principle—of reusing what already works to minimize business risk while supporting modern software practices—has positioned our customers to be better prepared to support the digital transformation of the business.
Build, test and deliver innovative software faster with less risk.
April 2017.
The lesson I learned is that open source quickly becomes the natural choice wherever commoditization is happening in the software stack. We thus expect business-to-business open source, already a significant trend in recent history, to become an increasingly common form of open source collaboration. Companies that understand the ground rules of business-to-business open source will be better positioned to identify and take advantage of open source opportunities in the competitive spaces they share with other companies.
So in today's talk I will share why an open strategy is important for the enterprise, and how to contribute to open source projects.
This document introduces DevOps, including how the DevOps movement emerged from practices like continuous integration and infrastructure as code. It describes the challenges of traditional software delivery approaches and how DevOps aims to close the gaps by integrating development and operations processes. DevOps supports continuous delivery through collaboration between dev and ops and supporting tools, and promotes principles like systems thinking, amplifying feedback loops, and continual learning.
DEVNET-1125 Partner Case Study - “Project Hybrid Engineer”Cisco DevNet
Programming and API knowledge are common themes across SDN and “Open”. As we focus more on software, we will see a proliferation of APIs and a need to understand programming. An effective _hybrid_ engineer tomorrow will have both solid networking skills as well as an understanding of programmatic concepts. Keeping these technology and industry transitions in mind, Cisco Americas Partners Organization (APO) kicked off “Project Hybrid Engineer” this summer for Cisco Partners SEs with a focus on enhancing hands-on network programmability knowledge. This session highlights some of the key initiatives underway where APO is taking its experiences and enabling key Cisco Partners workforce for Cisco's Network Programmability solutions early on in the lifecycle. If you are a Cisco Partner, come and learn how you can benefit from “Project Hybrid Engineer” and get your workforce ready for this key technology transition.
Similar to Badge des bonnes pratiques OpenSSF de la CII (20)
For nearly 20 years, Adullact has supported free-software alternatives for local governments. Recently, the association decided to set up a new service for its members, based on Nextcloud, in partnership with the company Arawa. The goal is to let local governments discover a free solution rather than entrust their data to the well-known alternatives offered by the GAFAM (Microsoft 365, Google Docs, etc.). The idea is not to compete with free-software companies but to foster familiarity with free software, which benefits the open source sector as a whole. Pascal Kuczynski, General Delegate of Adullact, and Philippe Hemmel, president of Arawa, will explain the approach and the way the service operates across multiple local governments, and will give a demonstration of the service.
Nowadays, when we talk about APIs, we usually think of REST APIs. They are everywhere, they use standard protocols and formats, and they rest on solid foundations...
For the system administrator, a REST API such as Redfish makes it easy, for example, to build a multi-vendor out-of-band management interface.
However, in some situations constraints can rule out a REST API, notably when your system is not directly reachable over HTTP. In that case you can still use an API, just one built on other standards, such as the venerable SMTP protocol!
In our practical case, an on-demand training workshop system, a web front end handles user registration so that Jupyter Notebook documents can be run on a back end hosting the JupyterHub instance, along with all the supporting systems needed to run the various workshops on offer (on Redfish, Git, Rust, as shown at https://hackshack.hpedev.io/workshops and via the HPE worldwide demo portal https://hpedemoportal.ext.hpe.com/).
To make all of this work smoothly, we used an SMTP API: the front end generates the SMTP content, and the back end uses procmail, scripts and Ansible playbooks to manage the configuration of the user environment. Once connected to the platform, the user can access their own workshop content, with all the links to the other systems available to carry out the actions. Why SMTP? Our needs were limited enough to avoid developing a full REST API (even though we also have one for the front end), we get the asynchronous nature of e-mail for free when handling requests, and it is fun to use the good old methods to show young engineers that there is more than one way of doing things ;-)
Tempted? Come and find out how we went about it and see how it all works, from the automatic deployment of the platform to the running of a workshop.
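The "e-mail as API call" idea described above can be sketched with Python's standard library. The subject line and JSON fields below are invented for illustration; they are not the talk's actual message format:

```python
# Sketch of an "SMTP API": the front end serializes a request as an
# e-mail message, and the back end parses it and acts on it (in the
# talk, via procmail, scripts and Ansible playbooks).

import json
from email.message import EmailMessage
from email.parser import Parser

def make_request(user: str, workshop: str) -> str:
    """Front end: build the RFC 5322 message that would be handed to SMTP."""
    msg = EmailMessage()
    msg["From"] = "frontend@example.org"
    msg["To"] = "backend@example.org"
    msg["Subject"] = "CREATE-WORKSHOP"      # the "method" of the call
    msg.set_content(json.dumps({"user": user, "workshop": workshop}))
    return msg.as_string()

def handle_request(raw: str) -> dict:
    """Back end: parse the message and extract the request parameters."""
    msg = Parser().parsestr(raw)
    payload = json.loads(msg.get_payload())
    return {"action": msg["Subject"], **payload}

raw = make_request("alice", "redfish-101")
print(handle_request(raw))
```

The asynchrony the speakers mention comes for free: the message can sit in the back end's mailbox until procmail picks it up, with no open HTTP connection to hold.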
Your team has finally decided to open source the project you have been working on for 2 years and publish it on GitHub.com. That is now done, so all that is left is to enjoy and reap all the benefits of going open source, right?
Well, no!
From what I can tell, you went about it the wrong way! And you will probably not get much out of it :-(
In this presentation, we will share with you best practices drawn from internal and external projects on the subject, so that you too can truly benefit from moving your code to open source.
Various aspects will be covered, including:
- When to go open source?
- What to open source?
- Which license to favor, and why?
- Community-driven development
- The benefits gained when the operation is well executed
A well-executed move to open source generates both intellectual satisfaction and returns for the company.
The free software movement began to organize itself in the mid-1980s, and one might think that after 35 years of evolution it has settled on an optimal way of working that is no longer questioned today. Yet nothing could be further from the truth. The organization of free software projects keeps evolving, challenging established models in the pursuit of ever greater efficiency. This can be seen in particular through four trends: legally lighter handling of contributions with the growing adoption of the Developer Certificate of Origin (DCO), the rising popularity of permissive licenses, the decline of historical governance models (meritocracy and benevolent dictator for life) in favor of a democratic model, and the emergence of new foundations embodying these changes.
This talk will present these four trends and their motivations.
Data in Motion: a key challenge in modernizing information systems, Open Source Experience
Whether through Open Data or Open Banking projects, or more simply the exchange of information between players in the same business domain (healthcare, insurance, transport...), data mediation is a key topic for every company and organization. The cloud can facilitate these exchanges, but the essential point remains the implementation and control of the flows: the data pipelines.
The journey of AAA Data, an association that became the leading company in automotive data, illustrates how opening up a data lake can irrigate an entire sector and help give rise to new services.
GitOps is a combination of best practices for automating the deployment of containers and infrastructure. Instead of actively pushing configuration changes, systems automatically synchronize their state against a controlled repository. In this workshop, you will discover how to put this method into practice to automatically manage Kubernetes clusters with Flux, which recently came out in version 2.
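As a rough idea of what such a setup looks like, a minimal Flux v2 configuration declares the Git repository to watch and the path to reconcile. The names and URL below are placeholders, not part of the workshop material:

```yaml
# Minimal Flux v2 "sync state from a repo" sketch; names and URL are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m            # how often Flux polls the repository
  url: https://example.com/org/my-app
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m           # how often cluster state is reconciled
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy          # directory in the repo holding the manifests
  prune: true             # delete cluster objects removed from the repo
```

With this in place, changing the cluster means committing to the repository; Flux converges the cluster toward the declared state rather than having changes pushed at it.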
What is the value of open source? An EU study on the impact of open ..., Open Source Experience
OpenForum Europe and Fraunhofer ISI carried out an ambitious study for the European Commission on the impact of open source software and hardware on technological independence, competitiveness and innovation in the EU. This study will guide European open source policy for the coming years, but it is also of interest to government bodies worldwide.
Our study indicates that the impact of open source on the European economy was in the range of €65 to €95 billion in 2018, while in that same year EU countries and companies made substantial investments in open source, amounting to more than one billion euros. The products of these investments are available for reuse in the public and private sectors, and to advance development and innovation.
Looking at the historical figures, it is clear that open source has contributed very strongly to economic growth, but with supporting policies and actions it could boost the economy much further. For example, if contributions to open source code grew by 10% each year, the European Union would see its GDP increase by €70 billion and could count 1,000 more start-ups in the ICT sector.
During this talk, representatives of Fraunhofer ISI and OpenForum Europe will share in detail the results of the economic impact study, case studies, a policy analysis and recommendations.
How to create a web application in a few minutes with a simple open source code generator that can be used with all kinds of languages and frameworks.
This presentation will be based mainly on a demonstration (building a web application in Python).
Epilepsy is a neurological disease that affects 600,000 people in France. Nearly a third of these patients are said to be drug-resistant, meaning they keep having seizures despite medication, leaving them with very few alternatives. These unpredictable seizures are a source of stress and a daily accident risk for patients. It is moreover a complex disease: more than 80 distinct epilepsies have been identified.
Aura is a non-profit association whose goal is to improve the quality of life of people with epilepsy. Through an Open Science approach, the project brings together a multidisciplinary community aiming to develop a connected chest patch that detects epileptic seizures and alerts the patient. For the person with epilepsy, better knowledge of their disease is a first step toward regaining autonomy and peace of mind in daily life.
A key element of the device is the design of a personalized seizure detection algorithm, based on statistical machine learning methods. To meet this technical challenge, we set up an automated processing pipeline built entirely on an open source stack (Airflow, MLflow, InfluxDB, Grafana) and a set of open source libraries dedicated to cardiac signal processing. We will present how this approach, still too marginal in the healthcare field, guarantees the transparency, reproducibility and robustness of our results, as well as the dissemination of our work to the scientific community.
To meet our customers' needs, we built a multi-tenant, multi-service cloud infrastructure based solely on open source components: oVirt, OKD (Kubernetes), AWX, Jenkins, Prometheus, FreeIPA, Foreman...
This platform, named W'Opla, first allowed us to create an offering competing with Office 365 or Google Apps: W'Sweet. Deployed as containers, this solution instantiates BlueMind, Nextcloud, Rocket.Chat and Jitsi for each organization, benefiting from Kubernetes' data isolation and process execution isolation.
Building on this experience and our expertise in identity and access management, we are launching a new "Identity as a Service" offering named W'IDaaS. It too relies on free software: OpenLDAP, LDAP Tool Box, LemonLDAP::NG, LSC, FusionDirectory. Each customer then gets their own directory and authentication portal, compatible with SAML and OpenID Connect. To our knowledge, this is the first 100% free and sovereign IDaaS solution.
You are an individual, an organization or a company, and you have created a very interesting piece of open source software. Users download it, you get feedback, bug reports, and requests for code additions and changes. It is then time to think about how to bring other people into your project.
In this talk, we will look at the important steps for attracting and involving developers and users, sweeping from the fundamentals, such as the contribution guide and the documentation, all the way to the beginnings of neutral governance.
We need solutions to face the ever-growing threat posed by attacks, the talent shortage, and the challenges of cost optimization in cybersecurity. The current trend is to rely on automation and orchestration of security operations.
Yet automating SecOps means handling a larger number of security alerts. The downside of this approach is that you may face a flood of new alerts every day. And with hundreds of critical- or high-severity vulnerabilities to manage, the daily security report looks like a fully lit Christmas tree. This can genuinely lead to team burnout or, worse, to bad vulnerability-management decisions.
Obviously, hoping to fix every flaw is unrealistic. Business leaders must therefore set a limit in agreement with their security teams. Prioritization is a key success factor for improving efficiency while continuing to deliver an appropriate, high-quality service for security incident response and vulnerability management. The CVSS score is not enough. So which metrics are relevant? How do you measure them? What decision do you make? How do you assess the effectiveness of this process, and how do you adapt it?
This talk aims to share ideas on a risk-based vulnerability-management methodology using open source solutions such as PatrowlHears.
This approach is made possible by the right balance between SecOps automation (to stay informed of vulnerabilities, flaws and other threats) and prioritization based on vulnerability metrics, threat news, and asset criticality. We will also look at examples of events that should lead us to reconsider vulnerability priorities.
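As a hypothetical sketch of such prioritization (the field names, weights, and formula below are illustrative assumptions, not PatrowlHears' actual model), combining CVSS severity, asset criticality, and threat intelligence might look like:

```python
# Hypothetical risk-based prioritization sketch. The weights and the
# formula are assumptions for illustration, not any tool's real scoring.
def risk_score(cvss, asset_criticality, exploited_in_wild):
    """Combine a CVSS base score (0-10), an asset criticality factor
    (1 = low .. 3 = business critical), and live threat intelligence."""
    score = cvss * asset_criticality
    if exploited_in_wild:  # active exploitation trumps raw severity
        score *= 2
    return score

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "crit": 1, "wild": False},
    {"cve": "CVE-B", "cvss": 6.5, "crit": 3, "wild": True},
]
ranked = sorted(findings,
                key=lambda f: risk_score(f["cvss"], f["crit"], f["wild"]),
                reverse=True)
# CVE-B (6.5 * 3 * 2 = 39.0) outranks the "critical" CVE-A (9.8 * 1 = 9.8)
print([f["cve"] for f in ranked])
```

The point of the sketch: a medium-severity CVE on a critical, actively exploited asset can outrank a critical CVE on a low-value one, which is exactly why CVSS alone is not enough.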
AliceVision is a photogrammetric computer vision framework providing 3D reconstruction and camera tracking algorithms. It allows creating a 3D textured model from the analysis of a set of unordered images of a static scene taken with any type of camera, from professional cameras to smartphones.
Meshroom is the graphical user interface built around AliceVision. It has a node-based interface with a default reconstruction pipeline that the end user can customize for a specific acquisition setup or industrial workflow, or extend with dedicated nodes that run tasks from other scripts or software. The pipeline is split into small tasks that can be computed in parallel on multiple machines on a render farm.
Meshroom has been used since 2014 in digital environment creation for the Visual Effects industry and now in many other industries including manufacturing, medical, cultural heritage, tourism, archeology, biology, surveillance and 3D printing.
During this session, we will present the technology behind AliceVision, illustrated by some concrete examples of production pipelines built around it.
Check out our website: alicevision.org
Software composition analysis using open source tools
One of the most widespread principles in engineering is "don't reinvent the wheel"; it is all the more important and common in computing. Today, more and more projects have open source dependencies, but with the convenience of using a library maintained by a whole community comes the responsibility of making sure that this library contains no known security flaws and that it is license-compatible with the rest of the project. This leads us to Software Composition Analysis (SCA), which mainly consists of two parts: producing an SBOM (Software Bill of Materials) detailing the dependency tree and the license information of every piece of software used in the project, and producing a vulnerability report for those dependencies, in order to warn users about the CVEs published for a given component.
At AdaCore, we decided to do this with two open source projects: ScanCode Toolkit and VulnerableCode. After reviewing the market leaders in search of a "plug-and-play" solution that would require little maintenance, we found that the open source equivalents were, in our case, more suitable and more flexible.
In this presentation, I will share the results of that analysis and explain how we put these solutions into practice.
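To make the SBOM half concrete, here is a minimal, illustrative sketch (the dependency tree is hand-written toy data; a real pipeline would consume ScanCode Toolkit output) that flattens a dependency tree into CycloneDX-style component records:

```python
# Minimal SBOM sketch: flatten a dependency tree into CycloneDX-like
# component records. Toy data only; real SCA pipelines derive this from
# tools such as ScanCode Toolkit rather than a hand-written tree.
def flatten(dep, out=None):
    """Walk a dependency tree and collect one component entry per package."""
    if out is None:
        out = []
    out.append({"name": dep["name"], "version": dep["version"],
                "license": dep.get("license", "UNKNOWN")})
    for child in dep.get("deps", []):
        flatten(child, out)
    return out

tree = {"name": "myapp", "version": "1.0", "license": "GPL-3.0",
        "deps": [{"name": "libfoo", "version": "2.3", "license": "MIT",
                  "deps": [{"name": "libbar", "version": "0.9"}]}]}

sbom = {"bomFormat": "CycloneDX-like", "components": flatten(tree)}
print([c["name"] for c in sbom["components"]])  # → ['myapp', 'libfoo', 'libbar']
```

Note how `libbar` ends up with license `UNKNOWN`: surfacing exactly these gaps (unknown licenses deep in the tree) is one of the main reasons to produce an SBOM at all.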
E-commerce in 2021: major technology trends in the development…
The future of open source e-commerce development is shaped by technological innovation and industry trends. Some of these have already taken hold and gained recognition.
This talk will briefly review the existing types of e-commerce solutions and cover the 5 main open source platforms dominating the market in 2021. We will also look in more detail at the current and future technology trends driving the development of key new features for the upcoming releases of these platforms, and we will try to explore their impact on the industry as a whole.
The talk is structured as follows: each technology trend is presented along with examples showing how each of the e-commerce platforms under study has chosen to interpret and implement it in new versions of its product.
Meet CrowdSec, the free and open source security automation platform relying on behavioral analysis and IP reputation. It analyzes visitor behavior, identifies threats, and protects digital services against all kinds of attacks. The solution also lets users protect one another: whenever an IP is blocked, every member of the community is notified. Already used in more than 110 countries across 6 continents, the solution builds a real-time IP database that benefits individuals, companies, institutions, and more.
The development of embedded and consumer IoT solutions highlights a choice of operating systems derived from GNU/Linux (using Yocto, Buildroot or, more rarely, classic distributions such as Debian with tools like ELBE). Google's Android operating system (also based on a Linux kernel) is very present in these areas (TV set-top boxes, multimedia, interactive kiosks, automotive infotainment with Android Auto and Android Automotive OS). In this talk we will describe the advantages and drawbacks of each solution (GNU/Linux or Android) depending on the intended project, according to several criteria:
- application domains
- architecture and technical difficulties
- available development tools
- security considerations
- license management
- commercial constraints (cost, certification/compatibility)
- ecosystem
- longevity and trends
At the end of the talk we will try to provide a concise comparison to help attendees make their choice.
Demystifying event-driven architectures with Apache Kafka
Event-driven architectures (EDA) are perceived as magical entities that instantly turn your systems into "real-time" systems! BUT, come to think of it, aren't they already real-time? What I mean is that adding an item to a cart is practically instantaneous in most online shops.
In fact, an EDA solves a completely different set of problems, and, using Apache Kafka, we will follow the path of evolution (or revolution).
Microservices are easy to get started with, but once you do, you always run into the same problems: data access, consistency, and failures (sound familiar?).
The solution? Patterns, patterns, nothing but patterns... You have probably heard of "Event Notification", "Event-carried State Transfer" or even "Event Sourcing", but how do you use them to solve your problems? And, more importantly, how can Apache Kafka help you take advantage of these patterns?
That is what we are going to find out.
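As a minimal sketch of the Event Sourcing pattern mentioned above (a plain in-memory list stands in for a Kafka topic; no specific Kafka client API is assumed), current state is rebuilt purely by replaying the event log:

```python
# In-memory Event Sourcing sketch. In a real system these events would be
# appended to a Kafka topic and state rebuilt by replaying the partition;
# here a plain Python list plays the role of the immutable log.
def apply(state, event):
    """Derive the current cart state purely from the event stream."""
    kind, item = event
    if kind == "ItemAdded":
        state[item] = state.get(item, 0) + 1
    elif kind == "ItemRemoved" and state.get(item):
        state[item] -= 1
    return state

log = [("ItemAdded", "book"), ("ItemAdded", "pen"),
       ("ItemAdded", "book"), ("ItemRemoved", "pen")]

cart = {}
for event in log:  # replaying the full log reconstructs the state
    cart = apply(cart, event)
print(cart)  # → {'book': 2, 'pen': 0}
```

The design point: because the log, not the state, is the source of truth, any consumer can rebuild its own view at any time, which is exactly what Kafka's retained, replayable topics enable.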
Open source code hosting and sharing platforms (GitHub, GitLab, Bitbucket, etc.) are the fundamental pillar of the dissemination and organization of open source software. They make it possible to structure source code, orchestrate contributions, manage versioning, manage the contributor community, and provide an essential showcase for every open source project.
Exposing your source code publicly is part of the open source sharing approach, but it can also expose its author to disclosing vulnerabilities that are more or less easy for an attacker to exploit. One of the most common and easiest to exploit consists of leaving secrets (API keys, passwords, tickets, confidential information, etc.) in plain text in the code or in its change history. The consequences of this kind of vulnerability can be disastrous for companies, organizations and citizens. The Uber case of 2016, where the personal data of 57 million customers leaked because of an unprotected password on GitHub, is a glaring example.
In this presentation we will discuss the causes that lead to this kind of vulnerability, the ways to protect against it, and the various open source tools that can scan projects and detect secret-disclosure risks early.
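A highly simplified sketch of how such scanners work (the two rules below are illustrative; real secret-scanning tools ship far larger, carefully tuned rule sets and also walk the git history):

```python
# Toy secret scanner: match well-known token patterns against each line.
# The rules are illustrative examples, not a production rule set.
import re

RULES = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def scan(text):
    """Return (line number, rule name) for every suspicious line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                hits.append((lineno, rule))
    return hits

sample = "db_host = 'localhost'\npassword = 'hunter2'\nkey = 'AKIAABCDEFGHIJKLMNOP'"
print(scan(sample))  # → [(2, 'Generic password'), (3, 'AWS access key')]
```

Running a check like this in CI, before a commit ever reaches a public repository, is what separates prevention from the costly cleanup of rewriting history after a leak.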
Rightly or wrongly, Kubernetes has established itself, in the space of 3 years, as one of the new standards for managing modern microservice architectures.
However, this complex tool comes with its share of traps that beginners will inevitably fall into... And the apparent ease brought by Kubernetes-as-a-Service offerings does not help!
After a quick tour of Kubernetes security news, I will give you the keys to avoid having your CPU cycles stolen (too easily).
The CII Best Practices Badge (OpenSSF)
1. An Introduction to the
Core Infrastructure
Initiative (CII)
Best Practices Badge
David A. Wheeler, dwheeler AT linuxfoundation DOT org
Director of Open Source Supply Chain Security
Linux Foundation
2021-11-08
2. Heartbleed
In 2014, Heartbleed vulnerability found in OpenSSL
Highlighted that OSS* projects don’t always follow widely accepted practices, which results in avoidable problems
*OSS=Open source software. OSS is licensed to its users in a way that allows them to run the program for any purpose, study and
modify the program, and freely redistribute copies of either the original or modified program (without royalties to original author, etc.)
3. OSS project practices matter!
It is not true that “all OSS is insecure” … or that “all OSS is secure”
It is not true that “all OSS is poor quality” … or that “all OSS has excellent quality”
OSS has potential advantage (mass peer review)
OSS tends to be more secure & higher quality if the
project follows good practices
Good people necessary, but insufficient
Both creators & users of OSS want good results
What are those good practices?
How can we encourage projects to follow them?
How can anyone know if they’re being followed?
4. CII* Best Practices Badge
Identified best practices for OSS projects
For production of OSS**
Based on practices of well-run OSS projects
Increase likelihood of better quality & security
Criteria designed for any OSS project
Web application: OSS projects self-certify
If OSS project meets criteria, it gets a badge
No cost
Self-certification mitigated by automation, public display of answers (open to criticism), spot-checks, and answers can be overridden if false
* CII = Core Infrastructure Initiative
** For receiving OSS (esp. license compliance), see OpenChain
5. Who created & runs the Badging Project?
Linux Foundation (LF)
“dedicated to building sustainable ecosystems around
open source projects to accelerate technology
development and industry adoption”
nonprofit mutual benefit corporation, 501(c)(6)
Linux kernel, JS Foundation, Cloud Native Computing
Foundation (CNCF), R Consortium, LF Energy, …
Core Infrastructure Initiative (CII) organized by LF
“to fund and support critical elements of the global
information infrastructure” & created badge project
Badging project is an OSS project created by CII
Yes, we earn our own badge
In 2020 transitioned to Linux Foundation’s Open
Source Security Foundation (OpenSSF)
CII no longer exists; may rename to OpenSSF in future
6. BadgeApp: Home page
To get your OSS project a badge, go to
https://bestpractices.coreinfrastructure.org/
8. CII badges are increasingly getting adopted!
Source: https://bestpractices.coreinfrastructure.org/project_stats as of 2021-11-08
[Chart: badge participation over time, all projects vs. projects with non-trivial progress]
Over 4,100 projects participating!
Over 640 passing!
General availability May 2016
9. Badge levels
Three badge levels (passing, silver, gold)
For higher levels, must meet previous level
Passing:
Captures what well-run projects typically already do
Not “they should do X, but no one does that”
66 criteria in 6 groups: Basics, Change Control, Reporting, Quality, Security, Analysis
Silver: Harder but possible for 1-person projects
Gold requires multiple developers
bus factor > 1*, 2-person review
Source: https://github.com/coreinfrastructure/best-practices-badge/blob/master/doc/criteria.md
10. Badge criteria developed to be reasonable!
Relevant
Attainable by typical OSS projects (esp. passing)
Clear
Include security-related criteria (but not only those)
Consensus of developers & users
Criteria & web app developed as OSS project
Built on existing work, e.g., Karl Fogel’s Producing Open
Source Software
Not hypocritical
Our web app must get its own badge!
Worked with several projects, such as the
Linux kernel & curl, to test criteria validity
11. Non-requirements
Does NOT require any specific technology, product, or
service
Does NOT require or forbid any particular programming
language
Sometimes includes tips
Exception: Expect projects to have a web page with TLS
NEVER requires proprietary software or service
You may use or depend on it
Does NOT cost anything
Does NOT “take over your project”
Does NOT require doing everything immediately
Some projects have immediately earned a badge
Most projects try for a badge, find some things missing, &
gradually work to fix those issues
12. Sample passing badge criteria (yes, they’re reasonable)
“The project website MUST succinctly describe what the software
does (what problem does it solve?).” [description_good]
“The project MUST use at least one automated test suite that is
publicly released as FLOSS (this test suite may be maintained as a
separate FLOSS project).” [test]
“At least one static code analysis tool MUST be applied to any
proposed major production release of the software before its
release, if there is at least one FLOSS tool that implements this
criterion in the selected language.” [static_analysis]
“The project sites (website, repository, and download URLs) MUST
support HTTPS using TLS.” [sites_https]
“The project MUST publish the process for reporting vulnerabilities
on the project site.” [vulnerability_report_process]
*FLOSS=Free/Libre/Open source software
Available in English, Chinese, French, German, Japanese, & Russian
Each criterion has a unique id; each id shown here in brackets
13. Badge scoring system
To obtain a badge, all:
MUST and MUST NOT criteria (42/66*) must be met
SHOULD (10/66*) met, OR unmet with justification
Users can see those justifications & decide if that’s enough
SUGGESTED (14/66*) considered (met or unmet)
People don’t like admitting they didn’t do something
In some cases, URL required in justification (to point
to evidence; 8/66* require this)
* For the passing badge
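The scoring rule on this slide can be sketched as follows (my reading of the rule, illustrative only, not the BadgeApp's actual implementation):

```python
# Sketch of the badge scoring rule described above: every MUST met,
# every SHOULD either met or unmet-with-justification, every SUGGESTED
# at least answered (status not "?"). Illustrative, not BadgeApp code.
def passes(criteria):
    for c in criteria:
        if c["level"] == "MUST" and c["status"] != "Met":
            return False
        if (c["level"] == "SHOULD" and c["status"] == "Unmet"
                and not c.get("justification")):
            return False
        if c["level"] == "SUGGESTED" and c["status"] == "?":
            return False
    return True

example = [
    {"level": "MUST", "status": "Met"},
    {"level": "SHOULD", "status": "Unmet",
     "justification": "single-maintainer project"},
    {"level": "SUGGESTED", "status": "Unmet"},  # answered, so acceptable
]
print(passes(example))  # → True
```

This mirrors the slide's point: SUGGESTED criteria only need an honest answer, while unmet SHOULD criteria need a public justification that users can weigh for themselves.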
14. Miscellaneous info
Badging web application has automation
Automatically examines projects on creation/edits
Fills in some info & rejects obviously incorrect
Some larger organizations require badging
Open Network Automation Platform (ONAP)
Cloud Native Computing Foundation (CNCF)
graduation requirement
Supports easy display of badge info
GitHub-style badge for README
REST API & CORS for easy display of info
For details, see (ONAP) https://wiki.onap.org/display/DW/CII+Badging+Program
(CNCF graduation) https://www.cncf.io/projects/graduation-criteria/
(Dashboard example) https://landscape.cncf.io/selected=kubernetes
15. Sample improvements of CII badge process
OWASP ZAP (web app scanner)
Simon Bennetts: “[it] helped us improve ZAP quality… [it] helped
us focus on [areas] that needed most improvement.”
Change: Significantly improved automated testing
CommonMark (Markdown in PHP) changes:
TLS for the website (& links from repository to it)
Publishing the process for reporting vulnerabilities
JSON for Modern C++
“I really appreciate some formalized quality assurance which
even hobby projects can follow.”
Change: Added explicit mention of how to privately report errors
Change: Added a static analysis check to continuous integration
script
Source: https://github.com/coreinfrastructure/best-practices-badge/wiki/Impacts
16. Improvements to Home Edge project (LF Edge)
Improved documentation:
Security and Testing policy
How to Contribute Guide
Descriptions of external APIs
Improved the build and testing system
CI infrastructure: GitHub Actions – 20 checks
Integration of external software tools for code analysis:
gofmt - 92%;
go_vet - 100%;
golint – 76%;
SonarCloud: Security Hotspots – 37 -> 0; Code Smells – 253 -> 50;
Duplications – 7.8% -> 2.3%
Improved security analysis:
Integrated CodeQL Analysis, LGTM services:
17 → 0 Security Alerts
Recommended that “the main thing is to start”
Source: LF Edge Technical Advisory Council (TAC) meeting, March 10, 2021,
https://wiki.lfedge.org/pages/viewpage.action?pageId=1671298#TechnicalAdvisoryCouncil(TAC)-PreviousTACCalls-MeetingSlidesandRecordings
17. Conclusions
Involved in an OSS project? Get a badge!
Start here: https://bestpractices.coreinfrastructure.org
Don’t need to do “everything at once” – just start!
Questions? Email or create an issue
Prefer using OSS from projects using best practices
They are trying to “do the right thing”
You want to use OSS from projects like that!
CII best practices badge helps identify those projects
Criteria need additions/refinements? Translations?
Let us know, we’re also an OSS project
More info:
https://github.com/coreinfrastructure/best-practices-badge
https://github.com/coreinfrastructure/best-practices-badge/wiki/Videos
Get or check on badges at:
https://bestpractices.coreinfrastructure.org
19. Many projects working towards silver & gold
[Charts: progress toward silver and progress toward gold]
Source: https://bestpractices.coreinfrastructure.org/project_stats?type=uncommon as of 2021-03-15
130 projects are halfway or better,
including 18 projects with silver
33 projects are halfway or better,
including 7 projects with gold
20. Some communities encouraging badges
Cloud Native Computing Foundation (CNCF)*
Maturity levels: Sandbox → incubating → graduated
For graduated level must “have achieved and
maintained a CII Best Practices Badge.”
Containerd graduated, has passing badge
R community discussing recommending badges
2018 survey:
90% believe badge will provide value to the R community’s
package developers or package users
77% saying it has benefit for both developers and users
74% would be willing to try it
Multiple R packages tried it out & began working
towards badges as part of discussion
DBI passing
Close to passing include ggplot2, covr, dodgr, netReg
Sources: CNCF Graduation Criteria v1.2
https://github.com/cncf/toc/blob/master/process/graduation_criteria.adoc
“Should R Consortium Recommend CII Best Practices Badge for R Packages: Latest Survey Results” https://www.r-consortium.org/blog/2018/07/26/should-r-consortium-recommend-cii-best-practices-badge-for-r-packages-latest-survey-results
21. Remote access enabled
Can easily embed current badge image
<img src="https://bestpractices.coreinfrastructure.org/projects/PROJECT_NUMBER/badge">
Easily shows current state on GitHub, etc.
REST API enables easy JSON data access
Including project database download for analysis
See https://github.com/coreinfrastructure/best-practices-badge/blob/master/doc/api.md
Cross Origin Resource Sharing (CORS)
Enables data access from client-side JavaScript
E.g., for fancy client-side dashboards
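For example, the per-project JSON can be fetched and summarized like this (a sketch; the `badge_percentage_0` field name is an assumption about the response shape, and the network call is shown but left commented out so the example is self-contained):

```python
# Sketch of consuming the BadgeApp REST API (see the api.md link above).
# The response fields used here are assumptions; check the API docs for
# the authoritative schema. The live request is commented out.
import json
# from urllib.request import urlopen
# raw = urlopen(
#     "https://bestpractices.coreinfrastructure.org/projects/1.json").read()

# Canned stand-in for a project JSON response:
raw = json.dumps({"id": 1, "name": "BadgeApp", "badge_percentage_0": 100})

def badge_summary(payload):
    """Extract a one-line badge status from a project JSON payload."""
    data = json.loads(payload)
    return f'{data["name"]}: {data["badge_percentage_0"]}% toward passing'

print(badge_summary(raw))  # → BadgeApp: 100% toward passing
```

With CORS enabled as the slide notes, the same JSON can be fetched from client-side JavaScript for dashboards.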
23. Sample impacts of CII badge process (1 of 2)
OWASP ZAP (web app scanner)
Simon Bennetts: “[it] helped us improve ZAP quality… [it] helped us
focus on [areas] that needed most improvement.”
Change: Significantly improved automated testing
CommonMark (Markdown in PHP) changes:
TLS for the website (& links from repository to it)
Publishing the process for reporting vulnerabilities
OPNFV (open network functions virtualization)
Change: Replaced no-longer-secure crypto algorithms
JSON for Modern C++
“I really appreciate some formalized quality assurance which even
hobby projects can follow.”
Change: Added explicit mention of how to privately report errors
Change: Added a static analysis check to continuous integration script
Source: https://github.com/coreinfrastructure/best-practices-badge/wiki/Impacts
24. Sample impacts of CII badge process (2 of 2)
BRL-CAD
Probably would have taken an hour uninterrupted; getting to 100% passing was relatively easy
Website certificate didn’t match our domain, fixed
POCO C++ Libraries
“... thank you for setting up the best practices site. It was really helpful
for me in assessing the status…”
Updated the CONTRIBUTING.md file to include a statement on
reporting security issues
Updated the instructions for preparing a release in the Wiki to include
running clang-analyzer
Enabled HTTPS for the project website
GNU Make
HTTPS: convinced Savannah to support HTTPS for repositories (it
already supported HTTPS for project home pages)
Source: https://github.com/coreinfrastructure/best-practices-badge/wiki/Impacts
25. Sample clarifications
vulnerabilities_fixed_60_days (PR #1188)
“There MUST be no unpatched vulnerabilities of medium or high
severity that have been publicly known for more than 60 days.”
Added: “… this badge criterion, like other criteria, applies to the
individual project. Some projects are part of larger umbrella… An
individual project often cannot control the rest, but an individual
project can work to release a vulnerability patch in a timely way.”
hardened_site (PR #1187)
“The project website, repository (if accessible via the web), and
download site (if separate) MUST include key hardening
headers… [GitHub is known to meet this]”
Added: “Static web sites with no ability to log in via the web
pages may omit the CSP and X-XSS-Protection HTTP
hardening headers, because in that situation those headers are
less effective.”
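The hardened_site criterion above can be checked mechanically. A rough sketch, assuming a plain dict of response headers; the header list is an illustrative subset, not the criterion's authoritative set:

```python
# Sketch: checking HTTP response headers for the hardening headers the
# hardened_site criterion discusses. The exact required set is defined by
# the criterion text; this list is an illustrative subset only.
HARDENING_HEADERS = [
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_hardening_headers(headers, static_site=False):
    """Return hardening headers absent from a response-header dict.

    Per the clarification above, a static site with no web login
    may omit CSP, because CSP is less effective in that situation."""
    required = list(HARDENING_HEADERS)
    if static_site:
        required.remove("Content-Security-Policy")
    present = {h.lower() for h in headers}
    return [h for h in required if h.lower() not in present]

print(missing_hardening_headers({"X-Frame-Options": "DENY"}))
# ['Content-Security-Policy', 'X-Content-Type-Options']
```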
26. Most common challenges for getting a badge
All projects at 90%+ but not passing (as of 2019-03-07): 265 projects.
MUST criteria answered “Unmet” or “?” give the top 10 challenges:

 #   Criterion                       %miss   Old rank
 1   vulnerability_report_process     21%       1
 2   tests_are_added                  17%       3
 3   vulnerability_report_private     15%       4
 4   know_secure_design               13%       9
 5   vulnerabilities_fixed_60_days    13%      24
 6   test_policy                      13%       5
 7   know_common_errors               13%       7
 8   static_analysis                  11%       8
 9   static_analysis_fixed            11%      21
10   sites_https                       9%       2

(Old rank is from 2017-09-06.) These challenges group into themes:
vulnerability reporting, tests, knowing secure development, analysis,
fixing, and HTTPS. They are mostly the same challenges as on 2017-09-06.
HTTPS is becoming less of a problem, dropping from #2 to #10. It is
unclear why fixing things has become a bigger problem!
27. Tests
Criteria
#1 The project MUST have evidence that such tests are being
added in the most recent major changes to the project.
[tests_are_added]
#4 The project MUST have a general policy (formal or not) that
as major new functionality is added, tests of that functionality
SHOULD be added to an automated test suite. [test_policy]
Automated testing is important
Quality, supports rapid change, supports updating dependencies
when vulnerability found
No coverage level required – just get started
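As the criteria stress, no coverage level is required to start: even a single automated assertion is a beginning that new tests can grow from. A minimal sketch (the slugify function stands in for hypothetical project code):

```python
# Sketch: the smallest useful start on an automated test suite.
# No coverage threshold is required by the passing criteria; the point
# is to have a suite that tests can be added to as functionality grows.

def slugify(name):
    """Tiny example function under test (hypothetical project code)."""
    return name.strip().lower().replace(" ", "-")

# A first test; run with any test runner, or just by executing this file.
assert slugify("  Best Practices ") == "best-practices"
print("all tests passed")
```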
28. Vulnerability reporting
Criteria
#2 “The project MUST publish the process for reporting
vulnerabilities on the project site.” [vulnerability_report_process]
#8 “If private vulnerability reports are supported, the project
MUST include how to send the information in a way that is kept
private.” [vulnerability_report_private]
Just tell people how to report!
In principle easy to do – but often omitted
Projects need to decide how
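A published reporting process can be as short as a few lines in the README or a SECURITY file. The contact address and wording below are invented placeholders, not official text from the badge project:

```markdown
## Reporting security issues

Please do NOT open a public issue for suspected vulnerabilities.
Instead, email a description and reproduction steps to
security@example.org (a private, monitored address).
We will acknowledge your report within a few days.
```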
29. HTTPS
#3 “The project sites (website, repository, and download
URLs) MUST support HTTPS using TLS.” [sites_https]
Details:
You can get free certificates from Let's Encrypt.
Projects MAY implement this criterion using (for example)
GitHub pages, GitLab pages, or SourceForge project pages.
If you are using GitHub pages with custom domains, you MAY
use a content delivery network (CDN) as a proxy to support
HTTPS.
We’ve been encouraging hosting systems to support
HTTPS
30. Analysis
#5 “At least one static code analysis tool MUST be
applied to any proposed major production release of the
software before its release, if there is at least one
FLOSS tool that implements this criterion in the selected
language.” [static_analysis]
A static code analysis tool examines the software code (as
source code, intermediate code, or executable) without
executing it with specific inputs.
#6 “All medium and high severity exploitable
vulnerabilities discovered with dynamic code analysis
MUST be fixed in a timely way after they are confirmed.”
[dynamic_analysis_fixed]
Early versions didn’t allow “N/A”; this has been fixed.
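The definition above – examining code without executing it – can be illustrated with a toy analyzer. Real projects should pick a mature FLOSS tool for their language; this sketch merely flags eval() calls in Python source by walking the syntax tree:

```python
# Sketch: a toy static analysis, in the sense used by the criterion --
# examining source code without executing it with specific inputs.
# It flags calls to eval(), a common risky construct in Python.
import ast

def find_eval_calls(source):
    """Return line numbers of eval() calls found in the parsed source."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"]

print(find_eval_calls("x = 1\ny = eval(input())\n"))  # [2]
```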
31. Know secure development
Criteria
#8 “The project MUST have at least one primary developer who
knows how to design secure software.” [know_secure_design]
#9 “At least one of the primary developers MUST know of
common kinds of errors that lead to vulnerabilities in this kind of
software, as well as at least one method to counter or mitigate
each of them.” [know_common_errors]
Specific list of requirements given – doesn’t require
“know everything”
Perhaps need short “intro” course material?
32. Documentation
#10 “The project MUST include reference documentation that
describes its external interface (both input and output).”
[documentation_interface]
Some OSS projects have good documentation – but some do not
33. Application security: Using an assurance case
We want applications to be generally secure
However, security:
Can’t be directly measured (“how many kilograms”)
Is an emergent property (totality of components)
Is often a negative property (“never does X”)
How can you know “we’ve done enough”?
“Did long list of things” doesn’t provide confidence
How do you know those were the right things?
Must be able to justify & refine later
Must avoid breaking the bank
Useful approach: an “assurance case”
Starts with the overall goal
Repeatedly break the goal into smaller parts
Not complicated – keeps track of what needs to be done
Pattern we’ve used may be useful to you too!
34. Assurance case: Top level (figure 1)
[Figure 1: top level of the assurance case. The top goal, “System is
adequately secure against moderate threats,” is broken into sub-claims:
security requirements identified and met by functionality
(confidentiality, integrity, availability; access control –
identification, authentication, authorization; assets & threat actors
identified & addressed) and security implemented in all software
development processes, i.e., by the software life cycle processes
(see next figure). Fill in the more specific requirements, then the
arguments of why they are met (design, implementation, verification, …)
– but avoid repetition.]
35. Assurance case: Next level (partial figure 2)
[Figure 2 (partial): the next level of the assurance case. Note that
this is not a waterfall – these are processes, not phases.]
36. Life cycle technical processes (figure 2)
[Figure 2: life cycle technical processes. Verification: many tools.
Design: especially an attack model plus the Saltzer & Schroeder
design principles.]
37. Security in implementation (figure 3)
[Figure 3: security in implementation. Most implementation
vulnerabilities are due to common types of implementation errors or
common misconfigurations, so countering them greatly reduces security
risks. The figure breaks “Security in implementation” into:

All of the most common important implementation vulnerability types
(weaknesses) countered – the entire OWASP Top 10 (2013 & 2017):
1. Injection (incl. SQL injection); 2. Authentication & session
management; 3. XSS; 4. Insecure object references; 5. Security
misconfiguration; 6. Sensitive data exposure; 7. Missing access
control; 8. CSRF; 9. Known vulnerabilities (see securely reuse /
supply chain); 10. Unvalidated redirects/forwards; 11. XXE (2017 A4);
12. Insecure deserialization (2017 A8); 13. Insufficient logging and
monitoring (2017 A10)

All of the most common known security-relevant misconfiguration errors
countered: entire most-relevant security guide applied

Hardening applied (reduce/eliminate impact if a defect exists):
hardened outgoing HTTP headers, including a restrictive CSP; force
HTTPS, including via HSTS; CSRF token hardening; incoming rate limits;
outgoing email rate limit; encrypted email addresses; cookie limits

Securely reuse (supply chain): review before use; get authentic
version; use a package manager]
38. BadgeApp dependencies and security
Tiny amount of new code in our system…
Because almost all code is reused
Direct dependencies = 75 gems
Direct AND indirect dependencies = 197 gems
Plus OS, language runtime, RDBMS, etc.
Today a key security concern for most projects is
vulnerabilities through their dependencies
Minimize dependencies, ask them to minimize their run-time
dependencies, sanity check of direct dependencies
Package manager: Track what we have, trivially update
packages
Dependency tools*: detect & report packages with known
vulnerabilities (GitHub + bundle audit)
Thorough automated tests: enable quick update, test, & ship to
production (we have 100% coverage)
Other measures, esp. hardening (such as CSP), reduce risk in
meantime
* Origin analysis / software composition analysis tools
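The dependency-tool step above boils down to comparing the versions you actually use against a database of known-vulnerable versions. A sketch, with advisory data and package names invented for illustration (real tools such as bundler-audit consult curated advisory databases):

```python
# Sketch: the core idea behind dependency-audit tooling -- flag any
# dependency whose pinned version appears in an advisory database.
# The advisory data below is invented for illustration.
KNOWN_VULNERABLE = {          # package -> set of vulnerable versions (hypothetical)
    "examplegem": {"1.0.0", "1.0.1"},
}

def audit(dependencies):
    """Return (package, version) pairs with known vulnerabilities."""
    return [(pkg, ver) for pkg, ver in dependencies.items()
            if ver in KNOWN_VULNERABLE.get(pkg, set())]

print(audit({"examplegem": "1.0.1", "othergem": "2.3.0"}))
# [('examplegem', '1.0.1')]
```

The thorough automated test suite mentioned above is what makes this actionable: once a vulnerable version is flagged, the project can update, re-test, and ship quickly.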
39. Got on Hacker News (HN)!
Badge-related post got on Hacker News front page on 2018-10-06
“Certainly not knocking on the badge or the practices…I just found it
amusing that PHP often gets a bad rap, but then shows up at the top of
the listed projects for objectively good development practices.” -
reindeerer
“I just found and read through the criteria list. It's mind-bogglingly
exhaustive, but in a very good way, and an excellent catalyst for
maintainable, secure software. I'd regard it as universally applicable
to any and all code.” – exikyut
“Lots of self-proclaimed ‘experts’ love to say ‘do X and Y and Z and you
will be successful because these are best practices’, but it's all a bunch
of snake oil… ‘Best practices are best not practiced.’” – userbinator,
dissenting, but then downvoted & replied to…
“Best practices are a bit like good genes. [They’re] by no means a
guarantee of success, fame, glory and riches, but damn if they don't
make things easier.” - reindeerer
“I see absolutely nothing dogmatic or cargo cult about the
recommendations they make. They are completely sensible, and a
decent guideline for improving the technical support infrastructure of a
project.” - throwaway2048
Source: https://news.ycombinator.com/item?id=18157494
40. Natural languages supported
English (en)
Chinese (Simplified) / 简体中文 (zh-CN)
French / Français (fr)
German / Deutsch (de)
Japanese / 日本語 (ja)
Russian / Русский (ru)
In progress:
Spanish (es), Swahili (sw), Brazilian Portuguese (pt-BR)
Even if you can’t understand the detailed justifications,
you can see the criteria & claimed answers
Our sincere thanks to
all our hard-working
translators!!
Help wanted!
41. Open source software
OSS: software licensed to users with these freedoms:
to run the program for any purpose,
to study and modify the program, and
to freely redistribute copies of either the original or modified
program (without royalties to original author, etc.)
Original term: “Free software” (confused with no-price)
Other synonyms: libre sw, free-libre sw, FOSS, FLOSS
Antonyms: proprietary software, closed software
Widely used; OSS #1 or #2 in many markets
“… plays a more critical role in the DoD than has generally been
recognized.” [MITRE 2003]
OSS is almost always commercial by law & regulation
Software licensed to the general public that has non-government use
is commercial software (in US law, per 41 USC 403)
42. Statistics about the criteria themselves
Level     Total    MUST   SHOULD   SUGGESTED   Allow   Met justification   Includes   New at
          active                               N/A     or URL required     details    this level
Passing     66      42      10        14        27            9               48         66
Silver      55      44      10         1        39           54               38         48
Gold        23      21       2         0         9           21               15         14
Source: https://bestpractices.coreinfrastructure.org/criteria
as of 2017-09-10
There are not a lot of gold criteria, but they’re challenging.
43. Passing criteria categories and examples (1)
1. Basics
The software MUST be released as FLOSS*. [floss_license]
It is SUGGESTED that any required license(s) be approved by
the Open Source Initiative (OSI). [floss_license_osi]
2. Change Control
The project MUST have a version-controlled source repository
that is publicly readable and has a URL. [repo_public]
Details: The URL MAY be the same as the project URL. The project
MAY use private (non-public) branches in specific cases while the
change is not publicly released (e.g., for fixing a vulnerability before
it is revealed to the public).
3. Reporting
The project MUST publish the process for reporting
vulnerabilities on the project site. [vulnerability_report_process]
*FLOSS=Free/Libre/Open Source Software
44. Passing criteria categories and examples (2)
4. Quality
If the software requires building for use, the project MUST
provide a working build system that can automatically rebuild
the software from source code. [build]
The project MUST have at least one automated test suite that
is publicly released as FLOSS (this test suite may be
maintained as a separate FLOSS project). [test]
The project MUST have a general policy (formal or not) that as
major new functionality is added, tests of that functionality
SHOULD be added to an automated test suite. [test_policy]
The project MUST enable one or more compiler warning flags,
a "safe" language mode, or use a separate "linter" tool to look
for code quality errors or common simple mistakes, if there is
at least one FLOSS tool that can implement this criterion in the
selected language. [warnings]
45. Passing criteria categories and examples (3)
5. Security
At least one of the primary developers MUST know of common
kinds of errors that lead to vulnerabilities in this kind of
software, as well as at least one method to counter or mitigate
each of them. [know_common_errors]
The project's cryptographic software MUST use only
cryptographic protocols and algorithms that are publicly
published and reviewed by experts. [crypto_published]
The project MUST use a delivery mechanism that counters
MITM attacks. Using https or ssh+scp is acceptable.
[delivery_mitm]
There MUST be no unpatched vulnerabilities of medium or
high severity that have been publicly known for more than 60
days. [vulnerabilities_fixed_60_days]
46. Passing criteria categories and examples (4)
6. Analysis
At least one static code analysis tool MUST be applied to any
proposed major production release of the software before its
release, if there is at least one FLOSS tool that implements this
criterion in the selected language… [static_analysis]
It is SUGGESTED that the {static code analysis} tool include
rules or approaches to look for common vulnerabilities in the
analyzed language or environment.
[static_analysis_common_vulnerabilities]
It is SUGGESTED that at least one dynamic analysis tool be
applied to any proposed major production release of the
software before its release. [dynamic_analysis]
47. Silver: Sample criteria (1 of 2)
The project MUST clearly define and document its project
governance model (the way it makes decisions, including key roles).
[governance]
The project MUST be able to continue with minimal interruption if
any one person is incapacitated or killed… [you] MAY do this by
providing keys in a lockbox and a will providing any needed legal
rights (e.g., for DNS names). [access_continuity]
The project MUST have FLOSS automated test suite(s) that provide
at least 80% statement coverage if there is at least one FLOSS tool
that can measure this criterion in the selected language.
[test_statement_coverage80]
The project MUST automatically enforce its selected coding style(s)
if there is at least one FLOSS tool that can do so in the selected
language(s). [coding_standards_enforced]
The project MUST implement secure design principles (from
"know_secure_design"), where applicable…
[implement_secure_design]
48. Silver: Sample criteria (2 of 2)
The project results MUST check all inputs from potentially untrusted
sources to ensure they are valid (a whitelist), and reject invalid
inputs, if there are any restrictions on the data at all.
[input_validation]
The project MUST cryptographically sign releases of the project
results intended for widespread use, and there MUST be a
documented process explaining [how to] obtain the public signing
keys and verify the signature(s)… [signed_releases]
The project MUST provide an assurance case that justifies why its
security requirements are met. [It MUST…] [assurance_case]
The project MUST use at least one static analysis tool … to look for
common vulnerabilities… , if there is at least one FLOSS tool that
can… [static_analysis_common_vulnerabilities]
Projects MUST monitor or periodically check their external
dependencies (including convenience copies) to detect known
vulnerabilities, and fix exploitable vulnerabilities or verify them as
unexploitable. [dependency_monitoring]
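The input_validation criterion above is the classic allowlist pattern: define exactly what is acceptable and reject everything else. A minimal sketch, with a hypothetical identifier format (the pattern is illustrative, not from the badge project):

```python
# Sketch of the input_validation idea: accept only what an allowlist
# ("whitelist" in the criterion text) permits, and reject invalid inputs.
# The identifier format below is a hypothetical example.
import re

# Allow: a lowercase letter, then up to 63 lowercase letters, digits, _ or -.
VALID_ID = re.compile(r"\A[a-z][a-z0-9_-]{0,63}\Z")

def validate_id(value):
    """Return value if it matches the allowlist pattern, else raise."""
    if not VALID_ID.match(value):
        raise ValueError(f"invalid identifier: {value!r}")
    return value

print(validate_id("my-project_1"))  # my-project_1
```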
49. Gold: Sample criteria
The project MUST require two-factor authentication (2FA) for
developers for changing a central repository or accessing sensitive
data (such as private vulnerability reports)… [require_2FA]
The project MUST have at least 50% of all proposed modifications
reviewed before release by a person other than the author…
[two_person_review]
The project MUST have a "bus factor" of 2 or more. [bus_factor]
The project MUST have a reproducible build… [build_reproducible]
The project MUST apply at least one dynamic analysis tool to any
proposed major production release of the software before its release.
[dynamic_analysis]
The project MUST have performed a security review within the last 5
years. This review MUST consider the security requirements and
security boundary. [security_review]
Hardening mechanisms MUST be used in the software produced by the
project so that software defects are less likely to result in security
vulnerabilities. [hardening]
50. Key URLs
CII best practices badge (get a badge):
https://bestpractices.coreinfrastructure.org/
CII best practices badge project:
https://github.com/coreinfrastructure/best-practices-badge
My thanks to the many who reviewed or helped develop the badging criteria and/or the software to implement it. This includes:
Mark Atwood, Tod Beardsley, Doug Birdwell, Alton(ius) Blom, Hanno Böck, enos-dandrea, Jason Dossett, David Drysdale,
Karl Fogel, Alex Jordan (strugee), Sam Khakimov, Greg Kroah-Hartman, Dan Kohn, Charles Neill (cneill), Mark Rader, Emily
Ratliff, Tom Ritter, Nicko van Someren, Daniel Stenberg (curl), Marcus Streets, Trevor Vaughan, Dale Visser, Florian Weimer
51. Involved in OSS?
If you lead an OSS project, what you do matters!
People depend on the software you create
The practices you apply affect the result
Secure or quality software is not an accident
Please try to get a badge, & show when you have it
If you’re considering using an OSS project
Check on the project – should you use it?
52. Release of presentation
This presentation is released under Creative Commons Attribution 3.0 or
later (CC-BY-3.0+)
Credits
Older versions were developed by the Institute for Defense Analyses (IDA);
thank you!
Editor's Notes
Hi, my name’s David A. Wheeler.
This presentation is an introduction to the Core Infrastructure Initiative (CII) Best Practices Badge. I hope to convince you that if you are part of an open source software project, you should try to get a best practices badge to help you identify and follow best practices. I also hope to convince you that if you use open source software, you should look for and prefer software that is following best practices, and that the badge can help you identify such projects. To get there, this presentation will explain the basics of the CII best practices badge. We’ll start with a little history, which I think will help explain why the badge exists.
In 2014, the Heartbleed vulnerability was found in the OpenSSL cryptographic library. OpenSSL is widely used, so this vulnerability had a big impact. However, the bigger issue was that when people investigated the OpenSSL project itself, many didn’t like what they saw. At the time the OpenSSL project didn’t have a lot of support and failed to apply some widely accepted practices. Defects, including vulnerabilities, can happen to any project, but avoidable problems are something else.
==
Heartbleed logo is free to use, rights waived via CC0, per http://heartbleed.com/
In short, the practices used by an OSS project affect its users. It is NOT true that all OSS is insecure, or that all OSS is secure. Similarly, it is NOT true that all OSS is of poor quality, or that it all has excellent quality. Instead, OSS tends to be more secure and higher quality if the project follows good practices.
Practices aren’t enough, of course, because OSS projects need good people to develop the software. But good people aren’t enough either. If the project doesn’t test the software it develops, or doesn’t use version control software, or doesn’t follow other widely accepted good practices, then many avoidable problems typically result. Both creators and users of open source software want good results, so it’d be helpful to identify those good practices and encourage their use. But what are those good practices? How can we encourage projects to follow them? And how can anyone know if those good practices are being followed by some particular project?
This leads us to the CII best practices badge. We identified a set of best practices for producing OSS, based on the practices of well-run OSS projects. Each practice increases the likelihood of producing better quality or security. We then turned those practices into a simple set of criteria that can be applied to any OSS project. Some criteria also apply to proprietary software, but many don’t, because many criteria focus on enabling worldwide review and participation.
We also developed a web application that allows OSS projects to self-certify that they meet the criteria. If an OSS project meets the criteria, the project gets a corresponding badge. All of this is at no cost to the OSS projects. We chose self-certification because there are literally millions of OSS projects, and self-certification can scale to such sizes. Self-certification systems can have problems, so we countered those problems in a variety of ways. Perhaps the most important is that we automate the process; in a number of cases we automatically determine if a project meets a criterion. We also require that the answers be public, so that the public can judge the accuracy of the answers. We do spot-checks, and the answers can even be overridden if a project falsifies their answers. As a result, we believe we’ve developed an approach that scales yet provides good confidence in those answers.
The badging project was created by the Linux Foundation’s Core Infrastructure Initiative, abbreviated as CII. The Linux Foundation is a nonprofit mutual benefit corporation which already supports a wide variety of OSS projects that you probably use every day, such as the Linux kernel, JS foundation, cloud native computing foundation, and R consortium.
The badging project is itself an OSS project, and you’ll be glad to know that we earn our own badge.
If you can’t remember anything else from this presentation, please remember that if you participate in an OSS project, please go to https://bestpractices.coreinfrastructure.org and start the process of getting a badge. What you see here is a quick screenshot of our home page, you just click on the green button to get started. You can also click on the Projects link to see some of the other projects that have or are working on getting a badge.
Lots of OSS projects have earned a best practices badge. You’re probably using many of them now. Badge earners include the Linux kernel, Kubernetes, Node.js, and curl. The OpenSSL project has made a number of changes, and they’ve earned a badge too.
===
Most of the logos shown here have a trademark owned by their respective project. They’re shown here to help quickly identify them (and congratulate them!).
As you can see, since May 2016 when the CII Badging project became generally available we’ve had continuous growth in the number of participating projects and the number of projects that have earned a passing badge.
There are three badge levels: passing, silver, and gold. “Passing” captures what well-run projects typically already do. Silver is harder but is still possible for 1-person projects. Gold is even more difficult and includes criteria that require multiple developers.
It’s important to understand that these criteria were specifically developed to be reasonable. This slide briefly lists some of the questions we asked before adding any criterion. I’m not going to go into these points in detail, I just want to emphasize that we strived to create reasonable criteria. We also worked with a number of projects to develop and review the criteria, to make sure they would work in a variety of circumstances.
Perhaps most important is what we do not do. We do not require any particular technology, product, or service. For example, we do not require or forbid any particular programming language. We do include tips for some common circumstances, but those are simply suggestions to help people in those common circumstances. One exception is that we do expect projects to have a web page and use TLS to secure web pages, because this provides a widely-used standard and secure way to get basic information. We never require proprietary software or a proprietary service, though projects may choose to use them. Getting a badge doesn’t cost anything. We do not “take over your project” – we simply present the criteria, and your project can decide how to meet them or even if the project should meet them. Most importantly, we do not require that everything be done immediately. Some well-run projects have immediately earned a badge, but most projects find that they are missing a few things. That’s not a problem – just fill in the website form with your current state, and update it later as you resolve those issues.
Here are a few sample criteria. I’m just going to quickly read them to you, and hopefully you’ll agree that these are reasonable things for an OSS project to do. Note that the criteria use the term FLOSS instead of OSS, to try to include everyone who develops such software regardless of their motivations. Every criterion has a unique identifier; identifiers are shown here in square brackets.
Please let me just read them to you.
“The project website MUST succinctly describe what the software does (what problem does it solve?).” [description_good]
“The project MUST use at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project).” [test]
“At least one static code analysis tool MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language.” [static_analysis]
“The project sites (website, repository, and download URLs) MUST support HTTPS using TLS.” [sites_https]
“The project MUST publish the process for reporting vulnerabilities on the project site.” [vulnerability_report_process]
Again, I hope you’ll agree that these are reasonable things for an OSS project to do.
The text of these criteria is available in a variety of natural languages.
Of course, we had to implement a badge scoring system. To obtain a badge, all the MUST and MUST NOT criteria must be met. In addition, each SHOULD criterion has to be met, or it may be left unmet if a justification is given. Users who review the badge answers can read those justifications and determine whether they are sufficient. Note that MUST, MUST NOT, and SHOULD all have their usual IETF meanings.
Some criteria are merely SUGGESTED. SUGGESTED criteria are criteria where there are many reasons that they might not apply to a particular circumstance or might be excessively difficult. We have some SUGGESTED criteria because we believe that people don’t like admitting they didn’t do something if it’s obvious they should.
Some criteria specifically require URLs to point to evidence. Higher-level badges require more evidence.
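The scoring rule just described can be sketched as follows. The data shapes are invented, and treating SUGGESTED criteria as merely needing an answer (not "?") is my reading of the notes above, not official scoring code:

```python
# Sketch of the scoring rule: every MUST/MUST NOT criterion must be met;
# a SHOULD criterion may be unmet if a justification is given; SUGGESTED
# criteria only need an answer either way. The dict shapes are invented.
def passes(criteria):
    """criteria: list of dicts with 'level', 'status', optional 'justification'."""
    for c in criteria:
        if c["level"] in ("MUST", "MUST NOT"):
            if c["status"] != "Met":
                return False
        elif c["level"] == "SHOULD":
            if c["status"] != "Met" and not c.get("justification"):
                return False
        elif c["level"] == "SUGGESTED":
            if c["status"] == "?":        # must at least be answered
                return False
    return True

print(passes([
    {"level": "MUST", "status": "Met"},
    {"level": "SHOULD", "status": "Unmet", "justification": "not applicable here"},
    {"level": "SUGGESTED", "status": "Unmet"},
]))  # True
```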
Here is some miscellaneous information.
As I mentioned earlier, the badging web application automates some steps. When you create a project entry we try to automatically fill in information, and when a project entry is edited we reject information that is obviously incorrect. That makes the badging process simpler and more accurate.
Some larger organizations already require badges in some cases, including the open network automation platform and the cloud native computing foundation. This shows that at least some organizations think that the badge is worth getting.
The badge application makes it easy to display badge information; for example, projects on GitHub can easily modify their README to display their current badge status. We have a REST API and support Cross-Origin Resource Sharing, also known as CORS. The REST API and CORS make it easy to get and display information for specialized needs, for example, to support specialized dashboards.
Many projects have said that getting a CII badge has been very helpful.
The OWASP ZAP project knew that they should have automated testing, but the desire to get a badge helped them turn their aspiration into a reality. The CommonMark project implemented HTTPS for their website and published how to report vulnerabilities to their project. The library JSON for modern C++ added information on how to privately report errors and added a static analysis check to their continuous integration script. These changes weren’t difficult, and the JSON for Modern C++ project said that they appreciated that these changes could be even done by hobby projects.
Are you involved in an OSS project? If you are, I strongly encourage you to try to get a badge for your project. Simply start at the badging website, https://bestpractices.coreinfrastructure.org. Don’t wait until you’re ready, simply get started and you can see what if anything is left to do. If you have questions, send us an email or create an issue, using a link at the bottom of every webpage.
If you’re looking at using some OSS, you should prefer to use OSS from projects that are applying best practices. Such projects are trying to do the right thing, and you want to use OSS from projects like that. It can be time-consuming to evaluate projects this way, so the CII best practices badge can help you identify such projects.
We’ve done our best to create good criteria, but nothing is perfect. If you think the criteria need additions or refinements, let us know. The best practices badge project is itself an OSS project, so we’d love to hear from you. If you want additional information, the URLs shown here should help.
The bottom line is: Get or check on best practices badges for OSS on https://bestpractices.coreinfrastructure.org.
Thank you for your time.
To determine what the “top 10” challenges are, I examined the projects that have at least 90% passing but not 100%, and sorted the MUST criteria that were “Unmet” or “?”. I didn’t include “SHOULD” or “SUGGESTED”, since those can be justified away with text. I skipped the “future” criterion crypto_certificate_verification_status, since it is not required.
The script “compute-criteria-stats” in the repository computed these.
Warning sign: https://openclipart.org/detail/104263/warning-sign
Beaker: https://openclipart.org/detail/272207/beaker-icon
Green tick: https://openclipart.org/detail/17014/greentick
Brain: https://openclipart.org/detail/140701/brain
All openclipart is released to the public domain (CC0), see: https://openclipart.org/share
Books: https://openclipart.org/detail/192515/stack-of-three-books
Unlock icon from http://www.iconsdb.com/red-icons/unlock-icon.html - This icon is provided by icons8 as Creative Commons Attribution-NoDerivs 3.0.
Physical LOC: Code 3,181; Test 2,831
https://codeclimate.com/blog/deciphering-ruby-code-metrics/