The document discusses upcoming developments in Node.js. It introduces Michael Dawson and Gibson Fahnestock, who lead various Node.js initiatives at IBM. They cover predictions for 2018, including improvements to diagnostics, security, and N-API support. Updates to the release process and various working groups are also outlined. Strategic initiatives like HTTP/2, ES modules, and async hooks are explored.
Continuous Delivery with Jenkins and Wildfly (2014) – Tracy Kennedy
A presentation on a continuous delivery pipeline that leverages Jenkins Enterprise, Jenkins Operations Center, Nexus, HAProxy, and Wildfly. Pipeline components run in Docker containers along with SkyDock/SkyDNS for service discovery and NSEnter for command-line access to containers.
I have evidence that using Git and GitHub for documentation, along with community doc techniques, can give us 300 doc changes in a month. I've bet my career on these methods and I want to share them with you.
PuppetConf 2016: A Tale of Two Hierarchies: Group Policy & Puppet – Matt Stone (Puppet)
Here are the slides from Matt Stone's PuppetConf 2016 presentation, A Tale of Two Hierarchies: Group Policy & Puppet. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
Master Continuous Delivery with CloudBees Jenkins Platform – dcjuengst
This document discusses the CloudBees Jenkins Platform for continuous delivery. It begins by outlining challenges that organizations face as their use of open source Jenkins grows. It then introduces the CloudBees Jenkins Platform as an enterprise-grade solution for Jenkins that provides features like high availability, security, scalability, and expert support. The document explores various components of the CloudBees Jenkins Platform, including CloudBees Jenkins Enterprise, support for cloud and containers, continuous delivery capabilities, and tools for monitoring and management at scale.
The Evolution of Glance API: On the Way From v1 to v3 – Brian Rosmaita
OpenStack Image Service (aka Glance) has been around from the earliest days of OpenStack and has been evolving ever since.
It's been three years since the last major update of its API - the v2 - went live with the Folsom release, and it is now time to move forward. With the recent introduction of large new features, such as Meta Definitions and Artifacts, the time has come to introduce a new version of the Glance public API: v3.
In this session, Glance driver Brian Rosmaita and Artifacts driver Alexander Tivelkov will talk about the history of the Glance API, the path it has taken since the initial release, and the challenges it faces now. Attendees will learn about the new experimental version of the Glance API, the plans to deprecate v1, and the new features available to Glance users.
Configuration As Code - Adoption of the Job DSL Plugin at Netflix – Justin Ryan
The Jenkins Job DSL plugin allows programmers to express job configurations as code. Learn about the benefits, from the obvious (store your configurations in the SCM of your choice) to the not-so-obvious (focus on intent, instead of succumbing to the distraction of multiple, complex job configuration options). We will share our experience adopting the plugin over the past year to create and maintain more complex job pipelines at Netflix.
This document discusses different types of continuous integration (CI) pipelines. It begins by describing staging CI, where jobs are triggered on new commits, and issues can arise if the build breaks. It then covers gating CI, used by OpenStack, where code is reviewed and tested before being merged without broken builds. Finally, it discusses doing CI yourself using open source tools like Gerrit, Zuul and Jenkins, alone or via the pre-built Software Factory project. The conclusion is that gating CI prevents broken masters and these techniques can be reused for one's own projects.
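The gating model described above can be sketched in a few lines of Python: a change lands on the target branch only if the test suite passes against the merged candidate. The function and callback names here are purely illustrative and do not correspond to any real Gerrit or Zuul API.

```python
def gate(run_tests, merge, candidate):
    """Merge a change only if tests pass on the merged candidate.

    run_tests: callable returning True when the test suite passes.
    merge: callable that lands the change on the target branch.
    """
    if run_tests(candidate):
        merge(candidate)
        return "merged"
    return "rejected"

# A passing change is merged; a failing one never reaches the target branch,
# which is how gating CI keeps master from breaking.
landed = []
print(gate(lambda c: True, landed.append, "change-1"))   # merged
print(gate(lambda c: False, landed.append, "change-2"))  # rejected
print(landed)  # ['change-1']
```

The key design point is that tests run on the *merge result*, not on the change in isolation, so two individually green changes that conflict semantically still cannot break the branch.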
This document provides an agenda and updates for the JHipster Conf conference. It outlines the schedule of presentations for both English and French tracks, which will cover topics like JHipster collaboration, Open Collective, reactive programming, OAuth, Kotlin, and extending JHipster. It also summarizes what is new in JHipster 6, including upgrades to frameworks and default changes. The roadmap discusses moving to JDL-based configuration, improvements to JDL, Prettier for Java, supporting other backend technologies, improved Azure integration, and enhanced cloud features.
Deploying your Drupal site, upgrading it, and scaling, clustering, and monitoring it ... all topics developers are often not involved with.
DevOps for Drupal explains the DevOps problem to a Drupal audience.
1) DevOps aims to bring developers and operations teams together to work more collaboratively. It promotes automation, measurement, and sharing between these teams.
2) Key aspects of DevOps include breaking down silos between teams, enabling better communication, automating processes like testing and deployment, and measuring metrics to improve performance.
3) DevOps is a cultural movement as much as a technical one - it focuses on building trust between teams with different skills who work towards a common goal.
.NET OSS CI & CD with Jenkins - JUC Israel 2013 – Tikal Knowledge
This document discusses using Jenkins for continuous integration (CI) and continuous delivery (CD) of .NET open source projects. It covers how to achieve CI using Jenkins by automating builds, testing on each commit, and more. It also discusses using NuGet for dependency management and Sonar for code quality analysis. Finally, it provides examples of using Jenkins to deploy builds to platforms like AWS Elastic Beanstalk for CD after builds pass testing.
NuGet is evolving to be much more than a Visual Studio extension used only for ASP.NET applications. Come see how NuGet can be used cross-platform and in new scenarios.
Building and Deploying MediaSalsa, a Drupal-based DAM as a Service – Julien Pivotto
This document discusses the Mediasalsa digital asset management system and how Inuits builds and deploys it. Mediasalsa is a Drupal-based system that stores, transcodes, and manages metadata for assets. Inuits developed Mediasalsa as a service and handles the infrastructure, which includes separate servers for the backend, web servers, databases, Solr, and transcoding. Inuits emphasizes culture, automation, measurement, and sharing in their approach. They use tools like Puppet, Jenkins, Logstash, and Icinga to deploy and manage Mediasalsa in a continuous delivery model.
LASUG Online: Introduction to Docker and Docker Tools – Vasiliy Fomichev
Docker is one of the fastest-growing technologies. Attendees will be introduced to Docker containers and learn how to set up complex, scaled xDB and Solr environments in seconds. Docker is becoming more and more popular: Microsoft has already integrated containers into Windows Server, and the release of a Windows OS kernel supporting containers is not far away. Join this session to learn how Docker can help with Sitecore development and system administration.
Introduction to Docker for Sitecore developers, sys admins, and managers. Docker history, use cases, use of Docker with Sitecore. Overview of Mongo and Solr on Docker with Sitecore. Shipping Sitecore code using Docker, Continuous Integration, and Immutable Infrastructure in today's CMS development. Docker makes DevOps a reality!
When infrastructure becomes so reliable it's boring, apps can shine. Here's how to learn more about configuration management and application deployment for OpenStack clouds like Cisco Metacloud. Delivered at the OpenStack Summit in Barcelona, October 2016.
This document discusses Octopus Deploy, a deployment automation tool. It describes Octopus Deploy's architecture and its seven-step deployment process: declaring environments, creating application packages, defining projects, creating deployment processes with steps and variables, releasing packages, and deploying releases to environments. Octopus Deploy supports features like automated deployments, rollbacks, configuration transformations, and integration with build pipelines. It provides visibility through audit logs and manages deployments across development, test, and production environments.
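The promote-through-environments flow with per-environment configuration transforms described above can be illustrated with a toy Python sketch. None of these names relate to Octopus Deploy's actual API; they only show the shape of the process.

```python
ENVIRONMENTS = ["development", "test", "production"]

def render_config(template, variables):
    """Substitute per-environment variables, in the spirit of config transforms."""
    return template.format(**variables)

def deploy_release(release, environments, audit_log):
    """Promote a release through each environment in order, keeping an audit trail."""
    for env in environments:
        config = render_config("db={db};env={env}", {"db": f"{env}-db", "env": env})
        audit_log.append(f"deployed {release} to {env} with {config}")
    return audit_log

log = deploy_release("myapp-1.2.0", ENVIRONMENTS, [])
for entry in log:
    print(entry)
```

The same release artifact moves unchanged from environment to environment; only the variables differ, which is what makes the audit log a faithful record of what ran where.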
Provisioning environments: a simplistic approach – Eder Roger Souza
This document discusses provisioning development environments using DevOps tools like Vagrant and Puppet. It introduces Vagrant as a way to create reproducible and portable virtual environments. Puppet is then presented as an automation tool to configure and manage the resources and applications within those environments. The document provides a high-level example of using these tools together to provision a load-balanced web application backed by MongoDB. It aims to demonstrate how DevOps practices like automation and cooperation between development and operations can help address challenges like frequent deployment and onboarding new team members.
PuppetConf 2016: Keynote: Pulling the Strings to Containerize Your Life - Scott Coulton (Puppet)
Scott Coulton is a Platform Engineering Lead at Autopilot who discusses how his company used Docker and Puppet to improve their CI/CD processes and speed up deployments to production while maintaining compliance. He explains how they had development teams deploy themselves by treating infrastructure as code that is automated, built, and tested. This allowed them to break down barriers and usher in a new wave of infrastructure development. Puppet was used for configuration management to containerize systems and help spread DevOps practices to other teams.
The document discusses the pros and cons of using Git. It acknowledges that Git is complex and unintuitive, but argues people will still use it because distributed version control encourages contributions and experimentation through easy branching. It also notes many popular open source projects use Git. The document then provides a basic overview of how to get started with Git configuration, cloning repositories, committing changes, branching, merging, and interacting with remote repositories on services like GitHub.
This document provides a brief introduction to Git, a distributed version control system. It describes what Git is and some of its key features, such as tracking changes to files over time, supporting distributed development, efficient object storage, easy branching and merging, and universal public identifiers. The document also discusses some of Git's internal mechanisms, such as SHA-1 hashes to uniquely identify objects, the index cache, and how commits and branches work.
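The SHA-1 object identifiers mentioned above are not magic: a blob's id is the SHA-1 of a small typed, length-prefixed header followed by the file contents. This snippet reproduces `git hash-object` for a blob using only Python's standard library.

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    """Compute the Git object id of a blob: sha1 over 'blob <len>\\0<data>'."""
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# Matches: echo "hello" | git hash-object --stdin
print(git_blob_hash(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Because the id is derived purely from content, identical files stored in any repository anywhere hash to the same object — this is what the summary means by "universal public identifiers".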
The document discusses using the Grunt task runner to manage build and testing tools for Drupal projects. It introduces Grunt and explains how it can be used to build a Drupal site from a codebase, validate code quality, and test functionality with Behat. The presentation demonstrates setting up a sample project with Grunt Drupal Tasks and running commands to build, validate, and test the project. It encourages adopting these practices for consistent workflows and encourages contributing to the Grunt Drupal Tasks project.
The document provides an overview of Maven and JUnit topics including how to set up a Maven project, the core concepts of Maven such as dependencies and repositories, how to write JUnit tests for a calculator example including annotations like @Before and @After, how to measure test coverage using tools like Cobertura, and guidelines for best practices in unit testing. It also assigns homework to convert an AddressBook project to Maven, add unit tests, and configure Cobertura to measure 85% line coverage as well as generate a Maven site.
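The @Before/@After lifecycle described for the JUnit calculator example has a direct analogue in Python's unittest (setUp/tearDown). The `Calculator` class here is a hypothetical stand-in for the talk's Java example, not code from the document.

```python
import unittest

class Calculator:
    """Hypothetical calculator, standing in for the talk's Java example."""
    def add(self, a, b):
        return a + b

class CalculatorTest(unittest.TestCase):
    def setUp(self):            # like JUnit's @Before: build a fresh fixture per test
        self.calc = Calculator()

    def tearDown(self):         # like JUnit's @After: clean up after each test
        self.calc = None

    def test_add(self):
        self.assertEqual(self.calc.add(2, 3), 5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CalculatorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("passed" if result.wasSuccessful() else "failed")  # passed
```

The fresh-fixture-per-test discipline is the same best practice the summary points at: no test depends on state left behind by another.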
General introduction of Git and its feature set. Subversion migration strategies using git-svn, subgit or github enterprise. Suitable for different audience types managers, developers, etc.
SD DevOps Meet-up - Jenkins 2.0 and Pipeline-as-Code – Brian Dawson
This is a presentation given at the March 16th San Diego DevOps Meet-up covering some of the upcoming activities around Jenkins 2.0 and the Pipeline plugins, which provide for Pipeline-as-Code and give Jenkins first-class pipelines and stages.
A talk given to the San Francisco Jenkins Area Meetup (JAM) in January of 2016 on the current state of the Jenkins project and some ideas we're looking at for the future.
How we built an open video conferencing service to help people stay connected during the corona pandemic.
You can watch the YouTube recording here (in German):
https://t.co/cg7bGKDOjB?amp=1
Stash – Taking Expedia to New Heights - David Williams and Christopher Pepe (Atlassian)
Discover how making the move from Perforce to Git at Expedia led to standing-room-only training sessions abundant with high fives. The move to Git improved Expedia's software development with faster development cycles, deeper integrations, increased transparency, and a more unified development platform.
This document provides an overview and schedule for the OpenStack Documentation Boot Camp held in September 2013. The schedule outlines presentations on various documentation topics that will be given each day. It encourages participants to ask questions, try hands-on labs, and contribute discussion topics. It also thanks the event hosts. The goals are to increase OpenStack adoption, provide support, be strategic and collaborative, provide truthful information, and achieve business objectives.
This document discusses lessons learned from building and growing a software startup. It describes how the company quickly built their initial product but ran into scaling issues. It outlines the technical infrastructure changes they made to improve stability, such as moving to the cloud, adding Redis, Resque, and MongoDB. The document also provides recommendations on performance testing, libraries, tools, and localization. Overall it advocates for just starting to build the product now rather than overplanning.
This document discusses Sonian's contributions to open source projects like Fog, Elasticsearch, OpenStack Swift, and Chef. It also describes Sensu, an open source monitoring framework developed by Sonian. Sensu is designed for dynamic cloud environments using a messaging architecture with RabbitMQ and Redis. It allows reusing existing Nagios plugins and is intended to work with configuration management tools like Chef and Puppet. The document advocates adopting an open source community approach around Sensu to help test, develop plugins/modules, and provide documentation.
DevOps lessons learned - Michael Collins (Devopsdays)
The document discusses lessons learned from trying to implement DevOps in a rapidly growing company. Some key lessons include: (1) being able to clearly articulate what DevOps means for both individuals and the organization; (2) trusting developers and providing them with what they need; and (3) starting DevOps efforts with a focus on development environments rather than just production. The document also emphasizes focusing on toolchains rather than individual tools, using a service delivery pipeline approach, and ensuring good communication and hiring practices.
Everyone wants (someone else) to do it: writing documentation for open source... – Jody Garnett
Many people will cite how their adoption of software was based on the quality of documentation, and yet documentation can be one of the largest gaps in quality with an open source project. This talk will discuss why that is, what you (yes you) can do about it, and how the author has so far managed to avoid burnout by learning to accept less-than-perfect grammar.
A FOSS4G 2015 Presentation
The document summarizes the development of Scripted, a lightweight browser-based code editor. It discusses observations that heavy IDEs are not ideal for JavaScript development and speed is essential. Two prototypes were created - Orion and Scripted. Scripted focused on speed, code awareness through static analysis, and module system comprehension. Near term goals include improved content assistance and a plugin model. Long term goals include debugging integration and support for additional languages.
August Webinar - Water Cooler Talks: A Look into a Developer's Workbench – Howard Greenberg
The webinar covered tools and techniques used by several developers in their work with Domino and XPages. Howard Greenberg discussed using SourceTree and BitBucket for version control of XPages applications. Jesse Gallagher presented his toolchain including Eclipse, Maven, and Jenkins for plugin and application development. Serdar Basegmez outlined his development environment including configuring Eclipse to develop OSGi plugins for the Domino runtime. All emphasized the importance of source control, testing, and documentation in their processes.
This document provides a case study on a project created using open source technology. It discusses analyzing project goals and resources, evaluating open source options based on total cost of ownership, implementing a solution using LAMP stack, and lessons learned. The project was developed using Linux, Apache, MySQL, and PHP based on the needs of a low budget, ability to invest in internal skills, and reduce dependency on external trends. Key steps included preparing the Linux server, using version control and local testing, and engaging the open source community for support.
[DevDay 2017] ReactJS Hands on - Speaker: Binh Phan - Developer at mgm techno... – DevDay Da Nang
A short description on ReactJS for absolute beginners. The presentation will walk you through why we should use React to develop web applications, as well as a live coding session where you can see it in action.
OSDC 2013 | Introduction into Chef by Andy Hawkins – NETWAYS
This presentation gives an overview of what Chef is and how to get started with it. It describes typical use cases and architecture, as well as cookbooks, data bags, and other concepts, and explains how to implement your configuration management solution. Finally, it shows how to run a successful Chef project.
Don't get blamed for your choices - Techorama 2019 – Hannes Lowette
As developers, we make choices all the time: architecture, frameworks, libraries, cloud providers, etc. And if you’ve been around for a while, you probably ended up regretting at least some of your choices.
In this session, we'll explore the typical pitfalls of making development choices and how to avoid them. By the end of this session, you will be armed to tackle any decision thrown at you.
Now, if only there was a way to prove to your peers and superiors that you acquired this skill...
Well, there is! RAD Certification! I'll end my talk by telling you about this awesome certification program!
Continuous Integration with Cloud Foundry Concourse and Docker on OpenPOWER – Indrajit Poddar
This document discusses continuous integration (CI) for open source software on OpenPOWER systems. It provides background on CI, OpenPOWER systems, and the Cloud Foundry platform. It then describes using the Concourse CI tool to continuously build a Concourse project from a GitHub repository. Key steps involve deploying OpenStack, setting up a Docker registry, installing BOSH and Concourse, defining a Concourse pipeline, and updating the pipeline to demonstrate the CI process in action. The document emphasizes the importance of CI for open source projects and how it benefits development on OpenPOWER systems.
Build software like a bag of marbles, not a castle of LEGO®Hannes Lowette
If you have ever played with LEGO®, you will know that adding, removing or changing features of a completed castle isn’t as easy as it seems. You will have to deconstruct large parts to get to where you want to be, to build it all up again afterwards. Unfortunately, our software is often built the same way. Wouldn’t it be better if our software behaved like a bag of marbles? So you can just add, remove or replace them at will?
Most of us have taken different approaches to building software: a big monolith, a collection of services, a bus architecture, etc. But whatever your large scale architecture is, at the granular level (a single service or host), you will probably still end up with tightly couple code. Adding functionality means making changes to every layer, service or component involved. It gets even harder if you want to enable or disable features for certain deployments: you’ll need to wrap code in feature flags, write custom DB migration scripts, etc. There has to be a better way!
So what if you think of functionality as loose feature assemblies? We can construct our code in such a way that adding a feature is as simple as adding the assembly to your deployment, and removing it is done by just deleting the file. We would open the door for so many scenarios!
In this talk, I will explain how to tackle the following parts of your application to achieve this goal: WebAPI, Entity Framework, Onion Architecture, IoC and database migrations. And most of all, when you would want to do this. Because… ‘it depends’.
5. Johan Bergström
@jbergstroem
João Reis
@joaocgreis
Rod Vagg
@rvagg
Gibson Fahnestock
@gibfahn
Refael Ackermann
@refack
Phillip Johnsen
@phillipj
Rich Trott
@trott
Myles Borins
@thealphanerd
Kunal Pathak
@kunalspathak
Michele Capra
@piccoloaiutante
Hans Kristian Flaatten
@Starefossen
Michael Dawson
@mhdawson
Wyatt Preul
@geek
George Adams
@gdams
8. The Mission
• Build
• Test
• Benchmark
• Release
• Host
• Support for Node.js and other projects.
• Which requires:
• Wide platform coverage
• High availability of build farms.
• Automation and documentation to reduce bus factor
• We have no 24/7 on-call staff!
15. Tech we use
• Jenkins
• Cloud provisioning (e.g. OpenStack)
• Ansible
jenkins.io
22. Jenkins
• Public test CI:
• https://ci.nodejs.org
• Restricted release CI:
• https://ci-release.nodejs.org
• Material design theme!!!
23. Jenkins - teams
• Access controlled by GitHub teams
• Per-job access
• Core, ChakraCore, Libuv, CitGM
• Streams, Llnode, node-report
24. 30 minute builds
• Build and test on all platforms in 30 minutes.
• Easier on a LinuxONE server than a Raspberry Pi!
• Problem: need fast builds
• Solutions:
• File caching
• ccache
• Fanning
28. Ansible
• One command to set up any new machine
• Define our own scripts that others can rely on
• Want to build Node? Go to:
• https://github.com/nodejs/build/tree/master/ansible
35. Jenkins pipelines
• Problem: we don’t have many pipeline experts
• Solution: amazing people from the community show up to help out!
• Great way to get involved (nodejs/build#838).
38. The Dream
• Problem: How do we give people the confidence to fix
machines, in architectures they’re unfamiliar with?
• Solution: One-click “destroy and reprovision machine”
Fix Everything
39. Sponsors page
• Currently acknowledged on the Build WG README.
• Want to do something like
https://adoptopenjdk.net/sponsors.html
• Only basic HTML knowledge required! nodejs/nodejs.org#1257
• Get involved!
Hey everyone,
My name is Gibson, and today I’m going to talk about the Node Build Working Group. Where we are today, where we’re trying to get to, and how you can help.
If you have any questions, criticisms, can’t understand my accent, or spot any spelling mistakes during this talk, feel free to put up a hand at any point.
Apologies in advance for my voice, Stephen and Myles had some banging beats last night.
TODO(gib): Enunciate
Let’s start with my favourite part of the talk, talking about me.
My name is Gibson Fahnestock. I work for IBM in the Runtimes Node team. We do a lot of work in the community, and we also ship our own build of Node.js. We recently released a build of Node that runs on z/OS, and we’re working on getting that upstreamed into the community. So if any of you happen to have a mainframe at home, feel free to try running Node on it!
On the community side I’m a core collaborator, and I’m also involved in several of the working groups. If you’re not familiar with Node.js Working Groups, they’re basically specialised task forces that focus on key areas.
I’m in Build, Release, Moderation, and CitGM. If you want to learn more about any of these, or anything else, feel free to find me online, or even in real life.
The talk is split into three easy-to-digest sections, and I’ve put little numbers at the top, in case you’re counting down the minutes.
The first is The Road so Far: how we currently operate and what we can do today.
The second is the Road Ahead, what we’re progressing towards, and all the stuff we want to do.
The last is how you can get involved. The world of devops can seem forbidding, but it’s actually a really great way to get involved with Open Source, and it’s easier than you think.
Unless you already think it’s easy, in which case … (feel free)
(in which case) … feel free to use this angry tweet template. I’ve actually included sample tweets, which could also be used as comments on Hacker News, for easy flaming.
So this rogues’ gallery is the current membership of the Build WG.
There are two different groups of people in the team, the first … (is the oldtimers)
(the first) … is the oldtimers, these people have been in the build working group for aeons, they built the Node infrastructure up with their bare hands, and they are comfortable getting into the bowels of a machine and digging around to fix problems manually.
The second is the newer members. Everyone here joined the team in the last year. I’ve barely worked out where half the machines are so far.
One of the key changes we want to make is to reduce the barrier to entry, and make it easier for people to get involved.
The mission of the WG is to give the rest of the Node foundation everything they need to make sure Node runs everywhere. Kinda like Q branch in a Bond movie, we give you the gadgets, and you go save the world.
We provide the infra which allows Node core, and other top level projects like libuv, node-gyp, and llnode, to compile, run test suites and get benchmark results on a bunch of different platforms.
We also host nodejs.org, which contains some great Node binaries, and there are also some docs or something.
There are 570 people in the Node Foundation, and over 100 collaborators. When you have this many users, high availability is pretty important. Giving a wider group the power to fix issues is key to maintaining an open source build farm, and part of this talk is about how you can manage that.
These are the amazing people who provide infrastructure for the foundation. Let’s just take a moment to thank everyone who helps us make sure Node runs everywhere.
We also have a bunch of Raspberry Pis that were donated by generous community members. It’s great to know that if you’re willing to donate, you too could have a Pi in Rod’s basement.
And there they are. One day one of these could have your name on it.
Our sponsors allow us to rack up some pretty impressive stats, we currently have over 165 build machines! We also cover a bunch of different platforms. Every PR is tested against almost 45 platforms!
Of course that sounds cooler when at least one of the builds is actually green.
That’s better.
All a new collaborator needs to know about running CI is that you go to a URL and fill in a form. However they can also dive into the individual jobs for more control.
There are three things we rely on for high availability. Jenkins
You know what, there’s actually a problem with this slide. Jenkins is just not inspiring enough.
Okay, that’s scary
Even worse
Cute, but not crazy enough
Okay, getting good
There we are, that’s what Jenkins should look like.
So we use Jenkins for job management, Cloud provisioning for super-fast machine creation,
and Ansible for machine configuration.
The first part is Jenkins. I’m just going to leave that guy there. If you maintain an open-source project you probably use Travis, and Travis is great.
But when you need more manual control, and when you want to support a wider range of machines, sometimes you need the raw power and Java heap space errors that only Jenkins can offer.
We have two Jenkins instances, a public one anyone can go see, and a private one where we do all the top-secret stuff I’m not allowed to talk about.
Oh, and naturally the most important feature is the UI, so no expense was spared on the theme.
Keeping lists of people co-ordinated is a pain, so we just use the GitHub teams we already have to give each Node group access to their own jobs.
When you’re running build and test on 45 platforms for each PR, and you’re getting 30-50 builds a day, that’s around 2,000 jobs, so it’s pretty important to be quick.
If a machine goes down and we don’t have a spare, everything stops, leading to complaints.
One thing that really makes a difference is to cache everything you can. Cache your downloads and git clones, and use this great tool called ccache to cache compilation results, so if you compile something that is 99% the same, you only have to recompile what you changed.
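As a rough sketch, wiring ccache into a build is mostly environment configuration (the compiler and cache size here are placeholders, not the CI’s actual settings):

```shell
# Route compiler invocations through ccache (placeholders, not the
# real CI configuration):
export CC="ccache gcc"
export CXX="ccache g++"
ccache -M 5G   # cap the cache size
ccache -s      # show hit/miss statistics after a build
```

With this in place, a rebuild of a mostly-unchanged tree only recompiles the files that actually differ.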
The other thing we do … (is fanning).
(The other thing we do) … is fanning, cause computers get hot too.
Okay, mandatory GIF out the way, it is pronounced JIF by the way, I’m happy to debate that with anyone afterwards.
So fanning is when you split out a build and test to run on multiple machines in parallel. So yes, your CI runs may finish really quickly on your mainframes, but on your 1st generation Raspberry Pis they can take a bit longer.
But hey, Pis are cheap, and Rod’s basement is large, so we can have plenty of them.
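Conceptually, fanning is just fan-out and join. A minimal shell sketch (the group names are illustrative, not the real CI job names):

```shell
# Illustrative sketch of fanning (not the real CI scripts): background
# jobs stand in for separate build machines running test groups.
run_group() {
  echo "start $1"
  # a real job would run one slice of the build/test matrix here
  echo "done $1"
}
run_group js-tests &      # fan out in parallel
run_group native-tests &
wait                      # join: report only once every group finishes
echo "all groups finished"
```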
We also want to make the onboarding easier for new Build team members (beards are not required by the way).
And one of the key ways we do this is with Infrastructure as Code. If you haven’t used Ansible before, it’s a way of automating machine setup and configuration. Basically it’s like the set of bash scripts you have to set up your machines, but much much more complicated and full-featured.
The other great thing about this is that anyone can set up their own machines to build and run Node with the same scripts. I mentioned that we do our own builds of Node at IBM, well we’re working on making our machine configuration use the community one.
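In practice that looks something like this (the playbook path and host name are placeholders; the real playbooks live under the ansible directory of the nodejs/build repository):

```shell
# Placeholders throughout: substitute a real playbook and inventory host.
git clone https://github.com/nodejs/build.git
cd build/ansible
ansible-playbook <playbook>.yml --limit <new-machine>
```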
And now we come to the second part of our talk, the quest.
We want to reduce the barrier to entry, and increase the pool of people who can fix Node infra issues.
So, this is the job that builds Node releases. I'll give you time to read through this. Editing this file is a bit like coding in the dark, you’re wandering around a job that goes on … (and on)
(goes on) … and on.
Another problem with Jenkins was that you had to enter the configuration information in the job itself. Everything is stored in a giant XML file, which is pretty hard to read, and pretty much impossible to edit.
As a general rule, if it’s not code stored in a Git repo with Pull Requests, it’s invisible (and it rots).
Fortunately the folks at Jenkins have been working hard on a solution.
It’s called pipelines. A pipeline allows you to put all the configuration in a Jenkinsfile stored in Git, basically like a travis.yml on steroids.
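For reference, a minimal declarative Jenkinsfile looks something like this sketch (the stage names and commands are illustrative, not the actual Node.js CI configuration):

```groovy
// Illustrative only: not the real Node.js job definition.
pipeline {
  agent any
  stages {
    stage('Build') { steps { sh './configure && make -j4' } }
    stage('Test')  { steps { sh 'make test' } }
  }
}
```

Because this file lives in the repository, every change to the job goes through the same pull-request review as any other code.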
It also allows us to open up our jobs for anyone to contribute to.
Visibility is important, take this code for example. Can you spot the issue with it?
I’ll give you a hint, it works fine now, but it’s going to start to cause problems around April next year.
The issue is that the NODE_VERSION code just takes the first character from the Node version, so Node 10.0.0 will become Node 1.x
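The failure mode is easy to reproduce in a few lines of shell (a sketch; the variable names are mine, not the actual job’s code):

```shell
# Taking a single character from the version string breaks as soon as
# the major version reaches 10:
version="v10.0.0"
buggy_major="${version:1:1}"     # one character only: "1"
fixed_major="${version%%.*}"     # everything before the first dot: "v10"
fixed_major="${fixed_major#v}"   # strip the leading "v": "10"
echo "buggy=$buggy_major fixed=$fixed_major"
```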
The point of this isn’t to shame the person who wrote it, the point is that with enough eyeballs, all bugs are shallow.
Also there’s no git blame, so I can’t find out who wrote it to shame them.
So, the problem with Jenkins pipelines is that they’re new technology: no one in the current build team has much experience with them. So this is where you come in. If you’ve used pipelines before, or you’re willing to learn, then please come and get involved.
Shout-out to Jon, who magically showed up and raised this issue a week after we decided in a meeting that we should probably look at this pipeline stuff. Help with this would be really amazing, so come talk to us.
Imagine that you’re a node collaborator like Brian, and you get this error when you try to run CI on your Pull Request. What do you do?
If you’re an experienced sysadmin you would ssh into the machine and … (do stuff)
(and) … do stuff until it’s fixed.
But what if there was a better, no prior knowledge solution?
Ansible scripts, especially when run through a graphical interface like Ansible Tower, allow you to simplify most problems down to “Click button to reprovision machine”
Not all the stuff we do is managing machines. One of the things we want to do is have a really nice Sponsors page on the website, to properly thank our sponsors. Being in the Build WG readme is pretty great, but how many people have actually seen the build WG readme?
This is something where a frontend developer could probably make something really nice in five minutes, whereas I’d flail around for half an hour and make some monstrosity.
This is an example of something another open-source project did. It looks really professional, and it’d be great to have something like that.
By the way, if you haven’t tried out the new superfast Firefox Nightly, I recommend checking it out.
So, to wrap up there’s loads of really exciting stuff we’re doing at the Build working group, and we’d really like your help.