This document discusses Bakson's efforts to implement continuous integration, delivery, and deployment practices for Ticketmaster's API team. It outlines the tools used such as Gitlab, Jenkins, SonarQube, Nexus, Rundeck, and Gatling. Automation is triggered upon code commits to run tests and deploy to environments. Testing occurs for each microservice rather than all services at once. This allows faster feedback loops while deploying features. The goal is to deploy to production continuously while ensuring quality and stability.
The document discusses the evolution of infrastructure tools at Seven Bridges from a manual process to tools like Context, Minion, and Vayu. Originally configurations were kept manually in Git and environments were difficult to configure. Context was created to version and clone configurations. Minion provided a uniform way to manage services and Vayu kept data on development environments and provided a web interface. These tools emerged to address needs rather than being pre-planned, and helped move infrastructure processes into production.
SD DevOps Meet-up - Jenkins 2.0 and Pipeline-as-Code (Brian Dawson)
This is a presentation given at the March 16th San Diego DevOps Meet-up covering some of the upcoming activities around Jenkins 2.0 and the Pipeline plugins, which provide for Pipeline-as-Code and give Jenkins first-class pipelines and stages.
Victor Morales presented on implementing continuous integration and continuous delivery (CI/CD) using OpenStack tools. He described selecting components like Jenkins for automation, Gerrit for version control, and Redmine for bug tracking. The solution deployed these components on an OpenStack infrastructure using Terraform for automation. Future plans included improving security and supporting more cloud providers.
Continuous Integration With Jenkins Docker SQL Server (Chris Adkin)
This document discusses using containers and Jenkins for continuous integration and deployment pipelines. It provides an overview of build pipelines and how they can be implemented as code in Jenkins using scripted or declarative syntax. Demonstrations are shown for simple webhooks, multi-branch pipelines, using build slaves in containers, image layering, and fully containerizing a build environment. Tips are provided on constructing Dockerfiles and using timeouts.
Continuous delivery is the process of automating the deployment of code changes to production. It involves building, testing, and deploying code changes through successive environments like integration, testing, and production. Continuous integration starts the process by automatically building and testing code changes. The release pipeline then automates deploying through environments. This finds issues early and allows for rapid deployment of code changes to production through automated testing and infrastructure provisioning.
Automate App Container Delivery with CI/CD and DevOps (Daniel Oh)
This document discusses how to automate application container delivery with CI/CD and DevOps. It describes building and deploying container images using Source-to-Image (S2I) to deploy source code or application binaries. OpenShift automates deploying application containers across hosts via Kubernetes. The document also discusses continuous integration, continuous delivery, and how OpenShift supports CI/CD with features like Jenkins-as-a-Service and OpenShift Pipelines.
Integrating Git, Gerrit and Jenkins/Hudson with Mylyn (Sascha Scholz)
This document discusses integrating Git, Gerrit, and Jenkins/Hudson with the Mylyn task-focused interface in Eclipse. It describes each tool and common workflows. Mylyn allows viewing only tasks relevant to the developer's current context to avoid information overload. The document demonstrates integrating these tools with Mylyn and encourages contributing to related open source projects.
The Jenkins open source continuous integration server now provides a “pipeline” scripting language which can define jobs that persist across server restarts, can be stored in a source code repository and can be versioned with the source code they are building. By defining the build and deployment pipeline in source code, teams can take full control of their build and deployment steps. The Docker project provides lightweight containers and a system for defining and managing those containers. The Jenkins pipeline and Docker containers are a great combination to improve the portability, reliability, and consistency of your build process.
This session will demonstrate Jenkins and Docker in the journey from continuous integration to DevOps.
A CI/CD Pipeline to Deploy and Maintain OpenStack - cfgmgmtcamp2015 (Simon McCartney)
An intro to the pipeline and related tools we built for building and maintaining a package-based OpenStack installation, with realistic, portable multi-machine development environments.
Container technology is shaping the future of software development and is causing a structural change in the cloud-computing world. Developers are embracing container technology and enterprises are adopting it at an explosive rate. Containers are a very powerful tool that streamlines your development and ops processes, saves companies money, and makes life much easier for developers.
Master Continuous Delivery with CloudBees Jenkins Platform (dcjuengst)
This document discusses the CloudBees Jenkins Platform for continuous delivery. It begins by outlining challenges that organizations face as their use of open source Jenkins grows. It then introduces the CloudBees Jenkins Platform as an enterprise-grade solution for Jenkins that provides features like high availability, security, scalability, and expert support. The document explores various components of the CloudBees Jenkins Platform, including CloudBees Jenkins Enterprise, support for cloud and containers, continuous delivery capabilities, and tools for monitoring and management at scale.
[RHFSeoul2017] 6 Steps to Transform Enterprise Applications (Daniel Oh)
The document provides a 6 step approach to transforming enterprise applications:
1. Re-organizing to DevOps;
2. Implementing self-service, on-demand infrastructure;
3. Automating deployments using tools like Puppet, Chef, and Kubernetes;
4. Establishing continuous integration and deployment pipelines;
5. Adopting advanced deployment techniques like blue-green deployments;
6. Moving to a microservices architecture.
The document discusses principles of continuous integration including version control, automation, and testing. It describes a basic continuous integration/continuous delivery (CI/CD) pipeline with stages for committing code, compiling, testing, and deploying to environments like acceptance, capacity, and production. Jenkins is presented as a tool for implementing CI/CD pipelines through automated jobs that can pull code, build, test, analyze, and deploy software.
This document discusses Jenkins Pipeline and continuous integration/delivery practices. It defines continuous integration, continuous deployment, and continuous delivery. It also discusses the benefits of using Jenkins Pipeline including open source, plugins, integration with other tools, and treating code as pipeline. Key concepts discussed include Jenkinsfile, declarative vs scripted pipelines, stages, steps, and agents. It demonstrates creating a simple pipeline file and multibranch pipeline.
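The concepts that summary lists (Jenkinsfile, declarative pipelines, stages, steps, agents) come together in a pipeline definition checked into the repository. The following is a minimal sketch of a declarative Jenkinsfile; the stage names and Maven commands are illustrative assumptions, not taken from the slides:

```groovy
// Minimal declarative Jenkinsfile sketch (illustrative stages and steps).
pipeline {
    agent any                               // run on any available agent
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }   // compile and package
        }
        stage('Test') {
            steps { sh 'mvn -B test' }            // run the test suite
        }
        stage('Deploy') {
            steps { echo 'Deploying...' }         // placeholder deploy step
        }
    }
}
```

Placed at the repository root as `Jenkinsfile`, this is the file a multibranch pipeline job discovers and runs for each branch.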
This document discusses different types of continuous integration (CI) pipelines. It begins by describing staging CI, where jobs are triggered on new commits, and issues can arise if the build breaks. It then covers gating CI, used by OpenStack, where code is reviewed and tested before being merged without broken builds. Finally, it discusses doing CI yourself using open source tools like Gerrit, Zuul and Jenkins, alone or via the pre-built Software Factory project. The conclusion is that gating CI prevents broken masters and these techniques can be reused for one's own projects.
Voxxed Luxembourg 2016 - Jenkins 2.0 and Pipeline-as-Code (Damien Duportal)
Born as Hudson in 2004 (cf. http://kohsuke.org/2011/01/11/bye-bye-hudson-hello-jenkins/), the Jenkins project has just reached a major milestone: Jenkins 2.0 (cf. https://groups.google.com/forum/#!msg/jenkinsci-dev/vbXK7JJekFw/BlEvO0UxBgAJ)!
This major release manages to reconcile support for legacy setups with a transition toward more modern continuous deployment practices.
Among the new features, Pipeline-as-Code and Docker integration are two elements from which you will be able to draw many benefits.
If you are interested in a concrete example of migrating from Jenkins 1.x to a Docker- and Pipeline-based workflow with Jenkins 2.0, this session is for you!
The running example is a "typical" Java/Maven project, stored in a Git repository, with tests and analyses set up as chained multi-jobs, which we will move into a Jenkins Pipeline configured via a file in the Git repository, in "continuous delivery" mode via Docker.
This document discusses DevOps workflows using OpenShift and ManageIQ. It describes using GitLab for source code management, CI/CD, and collaboration. OpenShift is used as a platform for deploying and managing containerized applications. ManageIQ orchestrates provisioning of the DevOps tools including FreeIPA for authentication, GitLab, and OpenShift. The ecosystem is integrated through a CI/CD pipeline that builds, tests, reviews, and deploys code changes from a Git repository to OpenShift.
>>> View this presentation online at http://github-service-universe.kimminich.de/ <<<
PDF version of the slide deck for my JavaLand 2015 talk "All-round careful Software Development with GitHub Services"
Tools for unit testing, building applications, analyzing software quality and planning release scopes are an essential aspect of modern software development. With GitHub and "pluggable" external services there are lots of options to move these aspects into "the Cloud". For open source projects this is a viable alternative to on-premise solutions. In this talk I will present and demonstrate the CI lifecycle of some of my recent projects hosted on GitHub, where I tried to integrate modern tools (e.g. Gradle, npm, bower) and external services (e.g. Travis-CI, Code Climate, Coveralls, HuBoard, AmazonSNS, NMA). The benefits and limitations of those services will be honestly illuminated. I am not affiliated with any of the providers mentioned, so this talk will not end up as a marketing show! Instead, the audience is supposed to go out of this talk with some new things to try out with their own GitHub projects, while hopefully being able to avoid some of the ramp-up difficulties.
This document provides instructions for setting up Gitlab and generating SSH keys. It demonstrates how to generate an SSH key pair, add the public key to Gitlab, clone a Gitlab project, commit changes and push commits to the remote repository. It also covers initializing Git flow and performing common Git and Gitlab tasks like creating branches, starting a release, and fetching from the remote.
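The workflow that document walks through (generate a key pair, register the public key, clone, commit, push) can be sketched end to end. In this self-contained sketch a local bare repository stands in for the Gitlab remote, since a real server is not available here; with actual Gitlab you would paste the public key into your profile's SSH Keys page and clone over SSH instead. All names and paths are illustrative.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# 1. Generate an SSH key pair (no passphrase, for the demo only).
#    The .pub file is what you would add to Gitlab under SSH Keys.
ssh-keygen -t ed25519 -N "" -f demo_key -C "demo@example.com"
cat demo_key.pub

# 2. A local bare repository plays the role of the Gitlab remote.
git init --bare remote.git

# 3. Clone the "remote", commit a change, and push it back.
git clone remote.git project
cd project
git config user.email "demo@example.com"
git config user.name "Demo User"
echo "hello" > README.md
git add README.md
git commit -m "Initial commit"
git push origin HEAD
```

With a real Gitlab project the clone URL would look like `git@gitlab.example.com:group/project.git`; the commit and push steps are identical.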
Jenkins is a Continuous Integration (CI) server or tool which is written in Java. It provides Continuous Integration services for software development, which can be started via command line or web application server. Jenkins Pipeline is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.
This document provides an overview of the Play 2 Java framework, including:
- A brief introduction to Play and how it allows building web apps with Java and Scala in a lightweight, scalable way based on Akka
- A live coding demo showing building a basic app that retrieves user data from GitHub's API
- Discussion of deploying the demo app to Heroku cloud platform
- Recommendation to ask further questions later via email
Anatomy of a Continuous Integration and Delivery (CICD) Pipeline (Robert McDermott)
This presentation covers the anatomy of a production CICD pipeline that is used to develop and deploy the cancer research application Oncoscape (https://oncoscape.sttrcancer.org)
DevOps and Continuous Delivery reference architectures for Docker (Sonatype)
This document provides links to blogs and presentations about DevOps and Continuous Delivery practices using Docker from various sources. It includes over 25 references to external resources on topics like Docker Universal Control Plane, Continuous Delivery, clustering Jenkins, Docker introductions, monitoring deployments, Docker in build pipelines, and deploying containers to IBM Bluemix. The document promotes a one-day DevOps conference, offers a free private Docker registry, and offers to share additional Docker reference architectures.
Slides from my presentation to the Sydney Jenkins Meetup on Declarative Pipeline. Video of the presentation available at https://www.youtube.com/watch?v=3R5xh4oeDg0&feature=youtu.be
Java and DevOps: Supercharge Your Delivery Pipeline with Containers (Red Hat Developers)
The document discusses how containers can help supercharge a software delivery pipeline by allowing developers and operations teams to work more collaboratively. It notes that containers offer benefits like resource consolidation and use of developer-friendly tools. It also advertises an upcoming event on optimizing Java applications for microservices using MicroProfile standards.
This document presents a group's self-assessment of their Ubuntu practice exercise. It includes an evaluation rubric with four criteria and the group's answers to each one. The group describes Ubuntu as a free, open-source operating system and explains three main characteristics of Linux: it is free, universal, and offers speed and stability. They attach images of elements of the Ubuntu interface and an image showing the creation of four directories.
V Vinod Kumar is a strategic marketing professional with over 18 years of experience in marketing, sales, brand building, and business expansion. He has a proven track record of analyzing consumer behavior and formulating strategic sales plans. Currently he is the Operations Manager at Sridevi Associates India, overseeing 50 retail stores across Andhra Pradesh and Karnataka. Previously he held roles like Area Sales Manager and Store Manager for various footwear and apparel brands.
The document outlines plans for family, work, health, and holidays. It details making dinner for family arriving at 7 pm tonight, finishing accounting work in 2 hours, and making bill payments tomorrow morning. Health plans include starting exercise next week and eating more salad and less ice cream. Holiday plans involve going to the beach next Saturday with their husband and learning to dance champeta that night.
This Fitbit Alta review proves that fitness trackers can perform a way beyond (Emma Taylor)
Fitbit has been an unbeatable fitness tracker brand from its birth until today. Although numerous complaints have been posted about this particular brand, something special persists that keeps it at the top. The prime gripe against earlier models is that they are not as stylish as other brands. As a solution, the Fitbit Alta emerged: stylish and comfortable.
This document presents a list of 21 practical exercises on using Windows for the Informática (TICS) course at the Universidad Nacional de Chimborazo. The exercises include creating shortcuts, creating folders and moving items between them, configuring the taskbar, changing the screen resolution and theme, and downloading themes. The document provides detailed instructions for students to complete each assigned task.
Relieving stress through laughter therapy (Philip2112)
The document describes the concept of stress and its causes. Stress arises from a mismatch between perceived demands and the mechanisms available to cope with them. This can be due to excessive or self-imposed demands, or to perceiving threats where there are none. To reduce stress, one must re-evaluate the demands and improve coping mechanisms such as communication and impulse control. Stress can manifest itself physically and emotionally, with symptoms ranging from headaches
Mohammed Abdul Jaleel is seeking a career opportunity and has over 15 years of experience in warehouse management and storekeeping roles. He has worked in warehousing for Saudi Aramco and a security systems company in Saudi Arabia. Previously, he worked for 7 years in sales and warehouse management for a telecom company in India. He has strong skills in inventory control, receiving and shipping, report generation, and staff supervision. He is married, holds a valid Saudi driving license and transferable iqama, and has intermediate and secondary education qualifications from India.
Gerardo J Ramos is seeking a customer service position with 4 years of experience as a supervisor/operator and banquet server. He has a background in radio and TV broadcasting and is currently studying that field. As a supervisor, he monitored agents, handled complaints, trained new hires, and ensured timely dispatching of calls. He also has experience serving guests, keeping communication with customers, and assisting with table setup and cleaning. Ramos is bilingual in English and Spanish with strong computer, leadership, and telephone customer service skills.
The Silicon Valley economy continues to thrive, with accelerating job growth, rising incomes, and increased innovation and entrepreneurship. However, housing and transportation challenges threaten the region's economic competitiveness and quality of life. Employment levels have surpassed pre-recession levels and are growing faster than any time since 2000. Unemployment rates have decreased across all groups. Incomes are rising for all racial and ethnic groups, though disparities remain. Public transit ridership is increasing along with continued expansion of businesses and services. Housing costs and lack of supply, as well as traffic congestion, pose serious issues.
Este documento presenta un plan de estudios para el curso de Informática (TICS) en la carrera de Licenciatura en Ciencias Sociales en la Universidad Nacional de Chimborazo. El curso cubre 36 lecciones sobre el uso del programa Excel, incluyendo cómo abrir y navegar en Excel, formato de celdas, tablas y hojas, sumas, funciones, gráficos e inserción y eliminación de filas, columnas y hojas. El documento concluye recomendando que Excel es un programa fácil de usar pero se debe manejar con cuid
A recent graduate with a Master's in Computer Science seeking a position as a web developer. He has experience developing websites including a job website and health prediction system. He has strong skills in HTML, CSS, JavaScript, and databases. He is also proficient in communication, problem solving, and teamwork.
GROM Associates is a global consulting firm providing strategy, SAP integration, and data analytics services for over 30 years. It utilizes a collaborative approach and proven processes to deliver high ROI solutions. GROM is an experienced SAP partner with certifications in implementation, support, and quality management. It offers services including implementations, upgrades, analytics, and on-demand support through workforce augmentation and turn-key projects. GROM's experience and collaborative approach helps clients succeed with their SAP initiatives and analytics goals.
The document describes a visualization system for medical data that was designed to intuitively display examination results and enhance readability of data. It includes two parts: regional summary and individual visualization. The individual visualization part allows users to view: 1) an individual's health trend over years via line charts, 2) their overall health status using a "fingerprint" model, and 3) individual health summaries and disease connections via bar charts. The system was developed using Python, Django, D3.js, jQuery, and MySQL and utilizes geriatric medical and cause of death datasets.
Este documento es un registro académico que detalla que Erika Chisaguano, estudiante del primer semestre "A" del año lectivo 2015-2016 en la Universidad Nacional de Chimborazo, Facultad de Ciencias de la Educación, Humanas y Tecnologías, Carrera de Licenciatura en Ciencias Sociales, tomó la asignatura de Informática (TICS) impartida por el Licenciado Fernando Guffante.
Continuous integration and delivery for java based web applicationsSunil Dalal
This document discusses continuous integration and delivery for Java web applications using Jenkins, Gradle, and Artifactory. It defines continuous integration and delivery and explains why they are important. It outlines the workflow and steps involved, including using source control, building and testing with Jenkins and Gradle, storing artifacts in Artifactory, running code analysis with tools like SonarQube, and deploying to test and production. Finally, it addresses some common questions around plugins, versioning, rollbacks, and build frequency.
DevOps Continuous Integration & Delivery - A Whitepaper by RapidValueRapidValue
In this whitepaper, we will deep dive into the concept of continuous integration, continuous delivery and continuous deployment and explain how businesses can benefit from this. We will also elucidate on how to build an effective CI/CD pipeline and some of the best practices for your enterprise DevOps journey.
This document provides guidance on starting and managing a Jenkins continuous integration (CI) system from scratch. It discusses both the psychological and technical aspects of setting up Jenkins, including installing plugins for jobs, control flow, additional functionality, and administration. Specific plugins are highlighted for tasks like triggering jobs, passing artifacts between jobs, and cleaning workspaces. The document emphasizes clear communication, identifying issues, and leaving time for unexpected problems when starting a new Jenkins system.
DevEx aims to improve the developer experience by focusing on tooling, technologies, and documentation within a DevOps environment. This includes adopting integrated toolchains that streamline the development lifecycle through automation and by ensuring tools are well-tested, configurable, and have comprehensive documentation. The goal of DevEx is to create an optimal software production environment by minimizing friction between development, testing, and operations teams through collaboration, shared tools, and improved processes.
#ATAGTR2019 Presentation "DevSecOps with GitLab" By Avishkar NikaleAgile Testing Alliance
Avishkar Nikale who is Senior Technical Architect at LTI took a Session on "DevSecOps with GitLab" at Global Testing Retreat #ATAGTR2019
Please refer our following post for session details:
https://atablogs.agiletestingalliance.org/2019/12/06/global-testing-retreat-atagtr2019-welcomes-avishkar-nikale-as-our-esteemed-speaker/
This document provides information on Jenkins, including:
- Jenkins is an open source automation tool that allows continuous integration and delivery of software projects. It builds, tests, and prepares code changes for release.
- Key benefits of Jenkins include speeding up the software development process through automation, integrating with many testing and deployment technologies, and making it easier for developers to integrate changes and users to obtain fresh builds.
- Jenkins uses plugins to integrate various DevOps stages like build, test, package, deploy, etc. It supports pipelines to automate development tasks.
A very big thank you to Michael Palotas from Grid Fusion & eBay International for taking the time and effort to travel across the globe to present at the Australian Test Managers Forum 2014. If you would like any information on TMF please email tmf@kjross.com.au
Integration Group - Lithium test strategyOpenDaylight
This document outlines an integration and test strategy for OpenDaylight projects. It proposes testing individual features in isolation and together to check for interference. Tests would run automatically through continuous integration on code changes or merges between projects. Features would be tested by installing them individually and with a set of compatible features to test integration. Metrics like code coverage, bugs, and performance would help evaluate quality.
Adrian marinica continuous integration in the visual studio worldCodecamp Romania
This document discusses continuous integration practices in the Visual Studio world. It defines continuous integration as integrating work frequently, usually daily, with automated testing to quickly detect errors. Successful continuous integration requires maintaining a single source repository, automating builds and testing, committing to the mainline daily, and automating deployment. Examples are given of companies that continuously integrate and deploy many times a day. Challenges include the effort required to set up systems to enable continuous integration and ensure sufficient test coverage. Release management with Visual Studio and Team Foundation Server is discussed as a way to facilitate deploying to multiple environments.
Action! Development and Operations for Sticker ShopLINE Corporation
The document discusses development and operations practices for LINE's sticker shop products. It covers their use of a single GitHub repository for source code management across multiple products, weekly releases to production, monitoring with Zipkin and Micrometer, load balancing, outage handling procedures including on-call rotations and postmortem reports, and their inspiration from Site Reliability Engineering practices.
CI (continuous integration) is fundamental for agile deployment and should be improved step-by-step. Issue tracking tools like JIRA, Redmine, and spreadsheets can manage requirements and tasks. Code review is an important part of the coding process and helps improve code quality and mentorship. Distributed version control systems like Git provide more flexible workflows than centralized ones and integrate well with code review tools like Gerrit.
Devops - Continuous Integration And Continuous DevelopmentSandyJohn5
This document discusses DevOps CI/CD practices. It begins by introducing DevOps and contrasting it with traditional Waterfall and Agile methodologies. The document then covers the various stages of CI/CD pipelines including continuous development, continuous testing, continuous integration, continuous deployment, and continuous monitoring. It discusses tools used at each stage like version control systems, testing frameworks, CI servers, configuration management, containers, and monitoring tools. Specifically, it provides details on tools like Git, Jenkins, Selenium, Puppet, Docker, Splunk and the ELK stack.
Continuous Integration/Deployment with Gitlab CIDavid Hahn
This document discusses continuous integration/deployment with Gitlab CI. It provides an introduction and overview of continuous integration, continuous delivery, and deployment. It then discusses Gitlab and Gitlab CI in more detail, including stages and pipelines, the UI, runners, using CI as code, and examples for Node.js + React, Java + Angular, and Electron applications. The sources section lists links and image sources for additional information.
KubeCon EU 2022 Istio, Flux & Flagger.pdfWeaveworks
Distributed Proxies have opened the floodgates for Service Meshes to provide substantial value at the Application Networking Layer, but early adopters of Service Meshes are often overwhelmed by operational complexities. How do you ensure that the proxy is distributed everywhere your software runs? How do you safely upgrade or roll back all those proxies? How can you ensure that your network config is correct - without pushing it to production and risking an outage? Following the GitOps Principles is key to simplifying Service Mesh Operations. Defining the entire service mesh declaratively - be it installation, proxy injection, or configuration - provides a mechanism to safely manage the complexities of a service mesh. Continuously reconciling declarative config with the latest service mesh release keeps you from being caught off-guard by CVEs. Progressive Delivery tools enable seamless movement from one version of a service mesh to another - and back - with minimal impact to traffic.
From 0 to DevOps in 80 Days [Webinar Replay]Dynatrace
From 0 to DevOps in 80 Days
Link to the webinar replay: https://info.dynatrace.com/apm_dtm_ops_17q3_wc_from_enterprise_tocloud_native_na_registration.html
“Innovate or die” may sound extreme, but it’s the only way to thrive in today’s ever competitive market. Bernd Greifeneder, CTO of Dynatrace, wanted to ensure that the company was relevant 5 years from now so he formed an internal incubator with one goal: transform Dynatrace into a Cloud Native DevOps organization.
The incubator focused on what the company needed to do in order to integrate nascent cloud technologies so that they wouldn’t be left in the dust when the inevitable tipping point to cloud arrives. Transforming into a cloud native company would allow for rapid release cycles and provide an embedded feedback loop.
The Results: Dynatrace now has a 99.998% availability of SaaS Service and can deploy changes within an hour if necessary. In parallel, a new SaaS and managed offering is released every 2 weeks with 170 production updates per day.
Watch this recorded webinar as Bernd Greifeneder shares the lessons learned moving Dynatrace from an on-prem company to one that is cloud native.
Bernd discusses:
• The driving factors that led to the transformation
• The goals that were set back in 2011 towards the engineering team
• How to sell such a transformation project in a large enterprise organization
• How to support this multi-year project from top down without impacting regular operations
• What's next on the innovator's mind
Covering topics like:
CI CD DevOps Jenkins TFS TeamCity Compile Test Package Delpoy
See Disclaimer in the last slide and/or in file comments, if available.
Simplified CI/CD Flows for Salesforce via SFDX - Downunder Dreamin - SydneyAbhinav Gupta
These slides were focused on Down Under Dreamin event, where a good emphasis was given on explaining CI/CD to wider audience including admins, who are not very comfortable with it.
Followed by an overview and walk-thru of Bitbucket setup process via SFDX
Another day, another buzzword in the world of software development! ‘Microservices’ is a new approach to structuring server-side software. But is it really new? In this talk I’ll walk you through the birth and ‘raison d’etre’ of microservices and tell about pro’s and con’s of the approach.
Having laid the foundation, we will take a look at best-practices and patterns for building micro service architectures and combine this with a tour of current technologies and development tools.
Finally, I will take a quick look at the future and discuss some of the remaining challenges. All parts of the presentation will be accompanied by structural examples based on a real ecommerse system.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
2. A bit of context…

Ticketmaster
Global ticket sales and distribution company.
A cliché, but the global leader in its line of business.
Large IT operation.
Engineering HQs in Los Angeles and London.
More than 150 platforms/products.
Both legacy stuff and edge technologies.

Bakson
Belgrade-based IT company.
Ticketmaster’s development centre.
Currently around 50 people, engineering only.
Mainly Java projects.
Strong in the local Scala community.
3. A bit of context…

Why emphasise the “quality”?
4. A bit of context…

Because of the business people
Each high-priority production bug (Business Disruption) can be
directly linked to, and measured in, money lost.
Bug? Fans can’t purchase tickets.
Bug? Fans can’t enter the venue.
5. A bit of context…

Because of the fans
Adele’s entire European tour sold out in two days, in less than 15 minutes per day.
Huge success. But…
7. A bit of context…

How can DevOps help teams?
And how do we move “there”?
8. A bit of context…

DevOps Maturity Model
Company-wide initiative.
Assessed by Gartner.
18 categories - “Deployment”, “Support”, etc.
Products are required to “move” through the matrix.
Progress is constantly evaluated.
Last-phase targets: Canary releases, Chaos Monkey, etc.
Additional benefits: standardisation, guidance.
9. Public API

HTTP service. Not RESTful.
Close to 100 endpoints/actions.
2 years live in production.
Development + QA team size = 10 people.
10. Public API

Distributed architecture (microservices).
Java stack.
Storage: relational, NoSQL, search engines…
APIGEE as the management layer.
Each microservice has its own source code repository.
11. Issues list

1. Long-lasting regression campaign
A week of testing once release development is completed.
2. Late integration with clients
Only going to the shared environment after the entire release is developed/completed.
3. Non-standardised deploy procedures
A variety of tools, or even manual steps. Procedures differ from environment to environment.
4. Difficult to pinpoint the root cause of broken functionality
Automated testing runs on the entire release; clients also test only the entire release build.
12. The goal

Start automation on feature completion (code pushed to repository):
Run Unit Tests → Do Static Code Analysis → Build & Save Package →
Deploy → Check Service Status → Run Integration Tests → Send Reports
13. Tool - Gitlab

Git repository management tool.
Many additional features: code review, continuous integration, deploy…
On premise or SaaS.
Free and Commercial editions.
The first point in our flow: upon code push, a webhook triggers
the next tool in the flow.
(Note: our first AWS-based service is using CI on Gitlab itself. But that is WiP.)
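As a rough illustration of that Gitlab-CI variant, a minimal `.gitlab-ci.yml` covering the first steps of the goal pipeline might look like the sketch below. Job names, the use of Maven, and artifact paths are assumptions for the example, not the team's actual configuration.

```yaml
# Hypothetical sketch -- not the team's real configuration.
stages:
  - test
  - analysis
  - package

unit-tests:
  stage: test
  script:
    - mvn test             # "Run Unit Tests" step of the goal pipeline

static-analysis:
  stage: analysis
  script:
    - mvn sonar:sonar      # "Do Static Code Analysis" via SonarQube

build-package:
  stage: package
  script:
    - mvn package -DskipTests
  artifacts:
    paths:
      - target/*.jar       # "Build & Save Package"
```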
14. Continuous Integration

Start automation on feature completion (code pushed to repository):
Run Unit Tests → Do Static Code Analysis → Build & Save Package →
Deploy → Check Service Status → Run Integration Tests → Send Reports

“Continuous Integration (CI) is a development practice that requires developers to integrate code
into a shared repository several times a day. Each check-in is then verified by an automated build,
allowing teams to detect problems early.”
- Martin Fowler
15. Tool - Jenkins

Automation server.
Gets additional power from the numerous plugins available.
Open source. Available only on premise.
The main unit is the “job”.
In TM, jobs can be created only through the code repository. Creation via GUI is disabled.
Two configuration XML files are part of the application code.
Reasoning:
- (distributed) versioning
- easy to restore in case of issues with the Jenkins server
- easy to migrate between Jenkins instances
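The team keeps freestyle-job XML configs in each service repository. As a hedged illustration only, the same stages could be expressed in today's Pipeline-as-Code style; the shell commands, script names, and healthcheck URL below are invented for the example, not the team's actual setup.

```groovy
// Hypothetical declarative Jenkinsfile; stage names mirror the deck's
// goal pipeline, but all commands are illustrative assumptions.
pipeline {
    agent any
    stages {
        stage('Run Unit Tests')        { steps { sh 'mvn test' } }
        stage('Static Code Analysis')  { steps { sh 'mvn sonar:sonar' } }
        stage('Build & Save Package')  { steps { sh 'mvn deploy -DskipTests' } }   // upload to Nexus
        stage('Deploy')                { steps { sh './deploy-to-tpi.sh' } }       // or hand off to Rundeck
        stage('Check Service Status')  { steps { sh 'curl -f http://service/healthcheck' } }
        stage('Run Integration Tests') { steps { sh 'mvn verify -Pintegration' } }
    }
    post { always { junit '**/target/surefire-reports/*.xml' } }
}
```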
17. Tool - SonarQube

Platform for continuous inspection of code quality.
More than 20 programming languages are covered.
Open source. Available only on premise.
Some TM teams fail the Jenkins job on code quality violations.
The API team reviews reports per Sprint/Release.
Using FindBugs as a plugin.
20. Tool - Nexus

Artifact repository.
Free or Commercial. Available only on premise.
OOTB support for multiple platforms (Java, NPM, Docker…).
The TM instance is locked against manual upload of artifacts.
Only Jenkins instances can upload, through a predefined Release plugin.
Support for the release process: only “promoted” artifacts are available for Production deploy.
23. GitFlow

Branching model, introduced by Vincent Driessen and popularised by Atlassian.
A feature/task is merged to “develop”
on completion (as per the “Definition of Done”).
A “release” branch is created on demand.
“Release” is merged to “master”
when ready for production.
This helps answer “When?”: on merge to “develop”.
24. Where?

The major problem…
…is not developing tests,
…it’s not creating environments,
…it’s not even automating the whole thing.
IT’S ALWAYS DATA.
25. Data setup

Because you (usually) can’t control the data
in your dependencies.

Use mocks
Easier to develop initially.
Difficult to maintain.
Tracking the evolution of dependencies.
Allows easier setup of testing environments.

Permanent data sets
Allow testing in a “real” environment.
Difficult to develop initially.
Easier to maintain, since the owners of your data
will have to migrate it together with the rest of their data.

We decided to go with permanent sets!
There is a creation tool available on TM backends.
26. API environments

Environments: DEVs, QAs, TPI, CAP, Stage, Production(s).

Stage and Production have SLAs defined.
Mapping to Gitflow:
“develop” -> TPI, “release” -> Stage, “master” -> Prod
27. Where?

Each service should have its own integration tests.
Test everything!!!
But for the API it is crucial that, on the Gateway,
“everything works”.
28. Tool - Rundeck

Tool for runbook automation and execution of arbitrary management tasks.
Open source. Available only on premise.
Is Rundeck even needed if you already use Jenkins?
- “Rundeck is made for Operations
and knows about the details of your environments.”
- “Jenkins is fundamentally not a deployment tool,
although it can be used like one.”
29. QA framework

Separate project. Own source-code repo.
Implemented in Java. Maven project.
Uses standard HTTP clients and Java testing libs (JUnit, TestNG).
Used for functional testing.
Blackbox testing of our services (no DB access, no log checks…).
Smoke suite: ~1,000 tests, ~5 mins to execute.
Regression suite: ~10,000 tests, ~35 mins to execute.
Every feature or bug we ever had is included in the regression suite.
We constantly support 2 API versions, with test suites covering both.
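The blackbox style described above can be sketched with nothing but the JDK: only HTTP goes in and out, and assertions look at the response alone. This is a minimal illustration, not the framework's real code; the real suites use JUnit/TestNG, and the stub endpoint below stands in for a deployed microservice so the example is self-contained.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hedged sketch of a blackbox smoke check: call the endpoint, assert on
// the HTTP response only -- no DB access, no log checks.
public class BlackboxSmokeTest {

    // The smoke assertion itself: is the response shaped as expected?
    static boolean looksHealthy(String body) {
        return body != null && body.contains("events");
    }

    static String get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        // Stub "microservice" on an ephemeral port; in real runs this is
        // the service freshly deployed to TPI.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/events", exchange -> {
            byte[] body = "{\"events\":[]}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        String base = "http://localhost:" + server.getAddress().getPort();

        String response = get(base + "/events");
        if (!looksHealthy(response)) throw new AssertionError("smoke test failed");
        System.out.println("smoke test passed");
        server.stop(0);
    }
}
```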
30. Implementation issues…

1. Limit the Jenkins “job” to be executed only from “develop”
- Branching out a new feature results in an identical copy of the Jenkins XML configs.
- Jenkins plugins have limited support for conditional execution in some phases.
2. QA “job” only to be triggered by a service’s “develop”
- Another set of conditionals/variables to be set/passed between jobs.
3. Know which services are involved in a feature
- The only way to cover all cases/features is to always deploy and test all services.
31. Try with job chaining

Standard Jenkins “freestyle” jobs support
simple sequential task execution.
Doesn’t work in our case:
- Git triggers would result in service restarts
while test execution is active.
An additional idea was to introduce an extra branch
so that the entire flow would not be triggered from “develop”:
- Additional work/thinking required from developers.
- Where to place the “signal” that would trigger the entire flow?
32. Try with plugins

The “closest” to what we need was found in the “JobFanIn” plugin:
“This plugin provides a watch on upstream projects
to trigger downstream projects
once all upstream projects are successfully built.”
Doesn’t work in our case:
- Impossible to predict which services a feature will reside on.
33. Step back. Rethink.

Do we really need to always deploy and test everything?
Does this approach actually fit a microservices architecture?
LET’S SIMPLIFY.
For each service, only deploy and test itself.
Yes, developers will need to do additional thinking
when finishing a feature that spans multiple services.
34. Testing agreement

On merge to develop (as per Gitflow),
deploy to the live environment - TPI.
Use permanent data sets.
Each microservice (and the gateway) will have an accompanying QA framework.
Execute it upon service deploy.
If a feature spans multiple microservices, it is on the developers to sequence the testing.
37. The goal

Start automation on feature completion (code pushed to repository):
Run Unit Tests → Do Static Code Analysis → Build & Save Package →
Deploy → Check Service Status → Run Integration Tests → Send Reports
38. Deploy validation

Via healthchecks:
internally exposed HTTP endpoints that provide a
summary of dependencies’ and internal statuses.
Every product must implement this TM standard.
The response must be quick:
the healthcheck status is composed by a background job.
Healthchecks are used in monitoring
and by load balancers.
Rundeck/Jenkins will fail the job if the healthcheck is negative.
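The aggregation behind such an endpoint can be sketched in a few lines. This is a minimal illustration under stated assumptions: the class and method names are invented, and the real TM standard's response format is not shown in the deck; the only grounded behavior is "summarise dependency statuses, answer quickly, and let callers fail on a negative result".

```java
import java.util.Map;

// Hedged sketch: compose an overall healthcheck result from dependency
// statuses (pre-computed by a background job, so the endpoint stays fast).
// Names (HealthCheck, overallStatus, httpCode) are illustrative.
public class HealthCheck {

    // "OK" only when every dependency reports healthy.
    static String overallStatus(Map<String, Boolean> dependencies) {
        boolean allUp = dependencies.values().stream().allMatch(Boolean::booleanValue);
        return allUp ? "OK" : "DOWN";
    }

    // The HTTP code a load balancer or a Rundeck/Jenkins job would act on.
    static int httpCode(String status) {
        return "OK".equals(status) ? 200 : 503;
    }

    public static void main(String[] args) {
        Map<String, Boolean> deps = Map.of("database", true, "searchEngine", false);
        String status = overallStatus(deps);
        System.out.println(status + " -> HTTP " + httpCode(status));
    }
}
```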
41. CD vs CD

“Continuous Delivery is about keeping your application in a state
where it is always able to deploy into production.
Continuous Deployment is actually deploying every change into production,
every day or more frequently.”
- Martin Fowler
42. CD vs CD
42
Why not all the way to Production?
We (API) are only the half-product. - Vanja Radaković (Product Manager)
Even if all tests on the API pass, that doesn’t mean no functionality is broken on our clients.
We “sit” a week in the Stage env, for sign-off from major clients,
between when a release is ready and when it is actually deployed to Production.
DEVs → TPI (QAs) → Stage → Production(s)
43. Automating the security
43
Veracode is a platform for application security scanning.
Commercial. Available only as SaaS.
We have added a branch that (via GitLab and Jenkins)
automatically uploads artifacts to Veracode.
Due to the long-running scan, this is not included
in the regular flow on feature completion.
There are company-wide defined policies.
We review the status once per Sprint/Release.
44.
45. Performance testing (WiP)
45
Running on dedicated environment.
Same topology (number of servers) and data size as in production.
Our production data is imported on demand.
All backend dependencies are mocked due to the difficulty of provisioning data.
The TPI we use for functional testing contains inconsistent data that is not big enough.
Mocks are based on our logs from production.
API mocking tool - WireMock.
What if you need to mock something other than an HTTP API, like storage?
Rethink your architecture.
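To illustrate the WireMock-style approach (stub mappings recorded from production logs, replayed by method and path), here is a simplified sketch; this is not WireMock’s actual API, and the routes and bodies are invented:

```python
# Hypothetical stub mappings, as would be derived from production logs:
# each maps a request shape to a canned response.
STUB_MAPPINGS = [
    {"method": "GET", "path": "/events/123", "status": 200, "body": '{"id": 123}'},
    {"method": "GET", "path": "/venues/7", "status": 200, "body": '{"id": 7}'},
]

def serve(method, path):
    """Return the recorded response for a request, or 404 if nothing matches."""
    for stub in STUB_MAPPINGS:
        if stub["method"] == method and stub["path"] == path:
            return stub["status"], stub["body"]
    return 404, ""
```

WireMock does the same matching (plus headers, bodies, and fault injection) behind a real HTTP listener.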
46. Tool - Gatling
46
Load testing framework.
Open source.
Supports code written in Scala or Java.
Can be executed from command line.
Easy to integrate with Jenkins using the official plugin.
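Gatling simulations themselves are written in Scala or Java; as a language-neutral sketch of the core idea (an assertion that fails the run when a latency percentile exceeds a target), here is a small Python example. The 95th-percentile and 500 ms numbers are arbitrary assumptions, not project values:

```python
import math

def percentile(latencies_ms, p):
    """Nearest-rank percentile of a list of latencies (milliseconds)."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_slo(latencies_ms, p=95, threshold_ms=500):
    """Like a Gatling assertion: pass only if the percentile is within the SLO."""
    return percentile(latencies_ms, p) <= threshold_ms
```

In a Jenkins integration, a failed assertion marks the build unstable or failed.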
47.
48.
49. Performance testing ideas
49
Automate in a way similar to security scanning - new branch.
Jenkins to build.
Rundeck to deploy.
Gatling to execute tests.
Bonus:
Attach APM tooling that would provide insights during testing.
Currently evaluating New Relic and Ruxit.
50. Logging
50
Company standards to separate logs:
• application log
• payload log (inbound/outbound)
• performance log
Only application logs are indexed.
Others are available on servers for N days (depending on retention policy).
A unique “Correlation ID” allows tracking of requests
through multiple services and all types of log files.
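A minimal sketch of the correlation-ID idea using Python’s stdlib logging; the logger setup and messages are illustrative, not TM’s actual implementation:

```python
import io
import logging
import uuid

def make_logger(stream):
    """Logger whose format prefixes every line with the correlation ID."""
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(correlation_id)s %(message)s"))
    logger = logging.getLogger("svc")
    logger.handlers = [handler]
    logger.propagate = False
    logger.setLevel(logging.INFO)
    return logger

def handle_request(logger, correlation_id=None):
    """Reuse the inbound ID (propagated between services) or mint a new one."""
    cid = correlation_id or uuid.uuid4().hex
    extra = {"correlation_id": cid}
    logger.info("request received", extra=extra)
    logger.info("calling downstream service", extra=extra)
    return cid
```

Because every line carries the same ID, a search for that ID joins one request’s trail across services and across application, payload, and performance logs.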
51. Tool - Splunk
51
Platform for operational intelligence.
Much more than log aggregation (searching, monitoring and visualization).
On premise or SaaS.
Free and Commercial editions.
Our dashboards: relationships between HTTP errors (not application errors) and clients.
Our alerting: on detected deviations/increases in error volume.
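The alerting rule can be sketched as a simple deviation check on per-interval error counts; the window size and factor below are arbitrary assumptions, not the actual Splunk alert definition:

```python
def should_alert(error_counts, window=6, factor=3.0):
    """error_counts: per-interval HTTP error counts, oldest first.
    Alert when the latest count deviates far above the recent baseline."""
    if len(error_counts) <= window:
        return False  # not enough history for a baseline
    baseline = error_counts[-window - 1:-1]
    mean = sum(baseline) / window
    # max(1.0, ...) avoids alerting on tiny absolute counts
    return error_counts[-1] > max(1.0, mean * factor)
```

In Splunk this would be a scheduled search over the indexed application log, firing when the condition holds.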
52.
53.
54. Benefits we (Dev team) got
54
1. Automation on “feature completion”:
less thinking for developers, quicker test and feedback cycles.
2. Using the same tools for all environments:
feeling very comfortable during production deploys.
3. Visibility of changes and metrics:
being able to react quickly, or even take preemptive actions.
4. Company initiatives as guidance:
no need to “reinvent the wheel”; shared knowledge, contributing to solutions.
55. Feel free to contact us:
office@bakson.rs
Thanks for listening!
Editor's Notes
Image downloaded from http://www.unsplash.com
Image is in Public Domain, so can be used for commercial purposes