Event: inovex Meetup "Let's talk about Docker!"
28.04.2016
Speaker: Arnold Bechtoldt
More tech talks: https://www.inovex.de/de/content-pool/vortraege/
Software Testing in a Distributed Environment (Perforce)
Distributed development across countries creates both challenges and opportunities for the production of high quality software. We’ll look at new ways of achieving automation for testing software in a continuous delivery context, using parallelization techniques and automated analysis fully integrated with a reliable and scalable SCM system. A new optimal method of testing common code in similar branches is presented along with the semantic merging of testing results.
Five Real-World Strategies for Perforce Streams (Perforce)
Before you deploy Perforce Streams in your organization, you should have a plan in place. Get advice and hear the five strategies for using Streams and how to handle integration exceptions gracefully.
Meta Infrastructure as Code: How Capital One Automated Our Automation Tools w... (Sonatype)
George Parris III, Capital One
In many companies, the cornerstone of the continuous integration and continuous deployment strategy is a handful of well-known automation tools that are vital to how software is built today with agile methodologies. Often, though, someone with some infrastructure experience simply spins up a server, installs the packages, and then builds and iterates on that same installation for the following years, which leaves the team on shaky ground every time a change has to be made.
On the Online Account Opening project at Capital One, we’ve strived to keep our entire infrastructure as immutable as possible. In that spirit, we decided to apply the same principle to our core CI/CD automation tools. By using Config as Code, implementing a useful backup and testing strategy, and utilizing some AWS capabilities, we’re able to make that happen.
Streams in Parallel Development by Sven Erik Knop (Perforce)
Perforce introduced Streams in 2011. Since then Streams have been adopted by the majority of new Perforce customers for all projects and by many existing customers for new projects. This is a brief overview of Streams and a deep dive into newer features that can help you with parallel and component based development.
Streams provide a flexible branching workflow that enforces best practices. They intelligently organize code modules and branching policies, ensure changes flow correctly, and simplify common processes like merging, increasing agility and scalability. Required components are a 2011.1 Perforce server and a P4V or P4 client. Streams will be available in August 2011 as part of beta releases.
The document summarizes a company's conversion of its embedded controller software from C to C++ over a two month period. It involved converting 8 projects with 30% shared code across 18 developers. Challenges included converting callbacks and dealing with scripting errors. Opportunities included improving code quality, team building, and evaluating new static analysis tools. The conversion was successful with minimal performance impacts and many bugs were found and fixed during the process. Future plans include C++ training and refactoring code to fully utilize C++ features.
Working with FME in an Agile Software Development Lifecycle (Safe Software)
FME has often been demonstrated to accomplish a number of complex tasks on isolated Server and Desktop applications. However, is it easy to carry our experience from the more comfortable GIS/data-management environments over to working with FME in a multi-tiered, multi-environment Agile software pipeline? Can FME be developed, tested, performance-tested, version-controlled and released iteration after iteration? Can we automate the deployment of the software? Can we automate the deployment of "code" (which in our case are workbenches)? Our presentation will demonstrate how we implemented FME as a successful contributor to a government digital exemplar within the UK. It is now live, operating 24/7 in secure hosted environments and integrating with a multitude of different applications and databases. It has been configured for easy deployments with scripting, continuous integration tools and automation. We have made our first ventures into implementing git-flow on our workbenches to allow agile team collaboration and code review whilst maintaining robust and resilient software.
Multi-Branched Development with Git Source Code Management (dopejam)
Developers work on feature branches off the mainline and merge their code into a QA branch designated by the release manager. Changes in the QA branch are tested and then merged into an integration branch for testing together. After integration testing passes, the build is deployed to the production environment. The development server and branches are obsolete after cutting over to the new process, but source control remains for reference.
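The promotion flow described above (feature branches merged into a QA branch, then into an integration branch, then deployed) can be sketched as a toy model. The branch names and the `Repo` class below are illustrative assumptions, not taken from the talk:

```python
# Illustrative sketch (assumed names): modelling the described promotion flow,
# feature branches -> QA branch -> integration branch -> production deploy.

class Repo:
    def __init__(self):
        # each branch holds the set of change IDs merged into it
        self.branches = {"mainline": set(), "qa": set(), "integration": set()}

    def feature(self, name, changes):
        # a feature branch starts from mainline and adds its own changes
        self.branches[name] = self.branches["mainline"] | set(changes)

    def merge(self, src, dst):
        self.branches[dst] |= self.branches[src]

repo = Repo()
repo.feature("feature/login", {"c1", "c2"})
repo.feature("feature/search", {"c3"})

repo.merge("feature/login", "qa")       # QA branch designated by release manager
repo.merge("feature/search", "qa")
repo.merge("qa", "integration")         # QA-passed changes are tested together
production = repo.branches["integration"]  # deployed once integration tests pass
print(sorted(production))               # -> ['c1', 'c2', 'c3']
```

The model makes the key property visible: nothing reaches production that has not passed through the QA and integration branches first.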
1) The document provides steps for getting started with 99translations.com, which includes creating an account, creating a project, defining master translation files, uploading current translations, integrating version control, and inviting translators.
2) It describes setting up a project by providing basic project details and uploading initial translation files in the appropriate format.
3) 99translations.com aims to improve translation quality, find translators for additional languages, avoid errors, and create workflows for collaborating with translators.
- Operators are applications that extend Kubernetes to manage complex stateful applications. They use custom resource definitions (CRDs) to configure and automate tasks.
- Helm is a good starting point for creating operators as it is widely used and easy to learn. Operators created with Helm can later be used to manage resources in other operators.
- The demo showed creating a Helm operator from an Nginx chart and combining two operators with ArgoCD to deploy example apps based on custom resources.
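The reconcile loop at the heart of the operator pattern the bullets describe can be sketched generically. The custom-resource shape and the `cluster` dict below are invented stand-ins; a real operator would watch and update objects through the Kubernetes API:

```python
# Hedged sketch of the operator pattern: a control loop compares desired state
# (from a custom resource) with observed state and acts to converge them.
# The resource shape and the cluster dict are illustrative assumptions.

def reconcile(custom_resource, cluster):
    desired = custom_resource["spec"]["replicas"]
    name = custom_resource["metadata"]["name"]
    observed = cluster.get(name, 0)
    if observed < desired:
        cluster[name] = observed + 1   # scale up one step per loop iteration
    elif observed > desired:
        cluster[name] = observed - 1   # scale down
    return cluster[name] == desired    # True once converged

cr = {"metadata": {"name": "nginx"}, "spec": {"replicas": 3}}
cluster = {}
while not reconcile(cr, cluster):
    pass
print(cluster)  # {'nginx': 3}
```

The point of the pattern is that the loop is idempotent: re-running it against an already-converged cluster changes nothing.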
OpenNfv Talk on Kubernetes and Network Function Virtualization (Glenn West)
This document discusses application orchestration with Kubernetes. It covers packaging applications for deployment on Kubernetes, satisfying performance constraints, and how Kubernetes can provide services to make developing and managing cloud native applications easier. It also discusses moving applications from VMs to containers on Kubernetes, including decomposing monolithic applications and implementing a DevOps approach using CI/CD pipelines. Key concepts discussed include labels, persistent volumes, infrastructure as code, and maintaining separate test, development and production environments.
This document provides an overview of continuous integration (CI), continuous delivery (CD), and continuous deployment. CI involves regularly integrating code changes into a central repository and running automated tests. CD builds on CI by automatically preparing code changes for release to testing environments. Continuous deployment further automates the release of changes to production without human intervention if tests pass. The benefits of CI/CD include higher quality, lower costs, faster delivery, and happier teams. Popular CI tools include Jenkins, Bamboo, CircleCI, and Travis. Key practices involve automating all stages, keeping environments consistent, and making the pipeline fast. Challenges include requiring organizational changes and technical knowledge to automate the full process.
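The staged flow this overview describes, where CI runs the build and tests, continuous delivery prepares a release, and continuous deployment releases automatically, can be sketched as a tiny pipeline runner. All stage functions here are stand-ins:

```python
# Minimal sketch of the CI/CD stages described above: stages run in order and
# the pipeline stops at the first failure. The production gate distinguishes
# continuous delivery (manual approval) from continuous deployment (automatic).

def build():             return True
def unit_tests():        return True
def deploy_staging():    return True
def deploy_production(): return True

def run_pipeline(stages, auto_deploy=False):
    for name, stage in stages:
        if name == "deploy_production" and not auto_deploy:
            return "awaiting manual approval"   # continuous delivery stops here
        if not stage():
            return f"failed at {name}"          # a stage only runs if all
    return "released"                           # previous stages passed

stages = [("build", build), ("unit_tests", unit_tests),
          ("deploy_staging", deploy_staging),
          ("deploy_production", deploy_production)]

print(run_pipeline(stages))                    # awaiting manual approval
print(run_pipeline(stages, auto_deploy=True))  # released
```

Swapping `auto_deploy` from False to True is exactly the step from continuous delivery to continuous deployment that the overview distinguishes.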
The document discusses a performance testing stack consisting of Gatling and Jenkins. Gatling is an open source load testing tool built on Scala that uses Akka and Netty frameworks. It is capable of supporting high loads without degradation. Jenkins is an open source automation server used to schedule and collect feedback from Gatling tests. The document also provides examples of charts produced by Gatling to analyze test results, including response times and number of active users over time.
This document provides guidance on integrating Jenkins with UFT by:
1. Deploying Jenkins and Tomcat on Windows, and configuring environment variables.
2. Installing the HP Application Automation Tools plugin in Jenkins to enable triggering UFT, QTP, ALM and other HP tests.
3. Configuring a Jenkins job to execute UFT test cases from the file system and archive results.
Code review automation and functional tests on Carrefour (Denis Santos)
Jenkins is used to automate the software development lifecycle including builds, tests, deployments, and more. A deployment pipeline promotes code through development, QA, and production environments running tests at each stage. Cucumber, Capybara, and Selenium are used for behavior-driven development and automated functional tests across different browsers and machines in parallel. SonarQube analyzes code quality after each build and will stop the pipeline if issues are found, reporting them to the development team.
Every organization wants to develop LabVIEW and TestStand applications better and faster. Learn how TI built a continuous delivery machine to accelerate overall software release cycles and deliver products in record time. Examine the concepts and tools used to deliver weekly software updates to a state-of-the-art framework developed in LabVIEW and TestStand. This resulted in a highly scalable sophisticated automated test platform that provides a uniform and robust method of semiconductor characterization to TI's validation community.
2016 CLA Summit - Branching Workflows for Team Development (Ching-Hwa Yu)
Branching and merging has historically been a problem area for many development teams. With the evolution of version control systems, this has become easier to manage. However, as various branching strategies have emerged, each has its own set of challenges. Through discussions with industry experts and my personal experience as a Product Owner, we'll take a look at a few of these strategies. We'll examine their benefits, their challenges, and how they relate to Continuous Integration.
This document discusses different stream strategies for software development using Rational Team Concert (RTC). It describes single-stream development, where all work is delivered to a single stream, and multiple-release development with dedicated development and maintenance streams. Multiple-application development uses streams to segregate development for multiple software components. The document demonstrates adding a component to a workspace and delivering changes to different streams, presents two use cases, one for a small team with multiple components and one for a large team, and concludes by inviting questions and discussion.
Anthony Saieva and Gail Kaiser. Binary Quilting to Generate Patched Executables without Compilation. ACM Workshop on Forming an Ecosystem Around Software Transformation (FEAST), Virtual, November 2020, pp. 3-8. https://doi.org/10.1145/3411502.3418424
Using Perforce Streams to Optimize Development of Flash Memory Solutions (Perforce)
Hear how SK Hynix, the world's second-largest memory chipmaker and the world's sixth-largest semiconductor company, uses Perforce Streams for globally distributed development of their Flash memory solutions.
The document outlines a branching and merging strategy with the following key elements:
1. It defines a branching model with a single master branch containing stable code and separate branches for sprints, features, fixes, and releases.
2. It establishes naming conventions for branches based on the sprint, major/minor release, and feature number. POM versions will also follow this convention.
3. It describes workflows for creating branches from the appropriate source branches and merging code with pull requests and testing.
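A small helper can illustrate such a naming convention. The exact pattern below (sprint, major/minor release, zero-padded item number) is an assumption, since the document does not spell out the format:

```python
# Hypothetical branch-naming helper for the convention outlined above.
# The concrete format is an illustrative assumption, not the document's spec.
import re

def branch_name(kind, sprint, major, minor, number):
    # e.g. feature/S12-2.1-F045: kind prefix, sprint, release, item number
    return f"{kind}/S{sprint}-{major}.{minor}-{kind[0].upper()}{number:03d}"

PATTERN = re.compile(r"^(feature|fix|release)/S\d+-\d+\.\d+-[A-Z]\d{3}$")

def is_valid(name):
    # enforce the convention, e.g. as a server-side pre-receive check
    return bool(PATTERN.match(name))

name = branch_name("feature", 12, 2, 1, 45)
print(name)            # feature/S12-2.1-F045
print(is_valid(name))  # True
```

Encoding the convention in one function keeps branch names and, as the document notes for POM versions, release numbering consistent by construction.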
Following on from the new company strategy, we will take a look at the priorities for the Perforce development team, sharing the product roadmap for the next 12 months and the recent updates made so that Helix continues to meet the demands of all our global customers.
Migrating IBM Cloud Orchestrator environment from v2.4.0.2 to v2.5.0.1 (Paulraj Pappaiah)
The document outlines the steps to migrate an IBM Cloud Orchestrator environment from version 2.4.0.2 to 2.5.0.1. The key steps include: 1) checking prerequisites and discovering the topology, 2) migrating images between the systems, 3) exporting OpenStack data from the original system, 4) importing the data into the new IBM Cloud Manager, and 5) verifying the migrated resources on both systems and ensuring resources are no longer available on the original system.
This document discusses CI/CD pipelines and how StriderCD can be used to implement one. It begins with an overview of traditional code deployment processes versus CI/CD. CI/CD aims to automate testing and deployment through continuous integration and continuous deployment. The document then discusses why CI/CD is important for improving quality, keeping the build fast, and enabling visibility. It presents StriderCD as an open source CI/CD platform like Jenkins and Travis but easier to use. The rest of the document covers implementing a CI/CD pipeline with StriderCD, its features like integration with version control systems and notifications, and concludes that CI/CD is essential and StriderCD is a good option.
Using Redgate, AKS and Azure to bring DevOps to your Database (Red Gate Software)
Join Hamish Watson and Rob Sewell to learn practical solutions on how to bring DevOps to your database, including:
• The importance of getting your database code into source control
• How to test your database changes
• Tools you can use to automate build and test processes
• How to build an automated deployment process for your database with Redgate tools
• How to embrace using Azure Kubernetes Services (AKS) in your deployment pipeline
• Deploying your entire pipeline as and when it is needed from Dev to Prod saving your organisation money
Database upgrades and data in general are often the most complicated part of your deployment process, so having a robust deployment path and checks before getting to production is very important.
The demos will showcase practical solutions that can help you and your team bring DevOps to your database using SQL Source Control, infrastructure as code, docker containers and SQL Change Automation – all leading up to a fully automated test and deployment process.
This will be a fun-filled, fast-paced hour, and you will learn some new skills that bring immediate benefit to your organization.
Next Gen Continuous Delivery: Connecting Business Initiatives to the IT Roadmap (Headspring)
Watch this presentation and download the slides at: http://headspring.com/nextgen
Continuous Delivery is helping streamline and automate the pipeline -- but research indicates it's no longer just about processes and tools. Organizational structures and skills need to change, too, bringing together developers, operations, QA and business stakeholders -- and facilitating this change is a new and special opportunity falling upon IT executive leadership.
In this Lunch & Learn presentation, our guest Kurt Bittner, Forrester Research's principal analyst, shares how organizations adopting this effective approach are achieving real business results. Following Kurt, Headspring's EVP of Operations, Glenn Burnside, walks through best practices for practical application.
The document discusses the SQALE method for evaluating source code quality. SQALE was developed by experts independent of any tool vendor to provide an objective, precise method that avoids issues like false positives. It promotes evaluating quality by measuring the remaining work needed to fix issues. SQALE provides a standardized model that can be tailored to different languages and criticality levels, and gives guidance on remediation priorities.
Manual application deployment processes tend to be error-prone and inefficient, and can make achieving consistent deployments seem impossible.
There is good news. You don’t need to choose between a careful, rigorous approach and a speedy but haphazard one. It’s possible to implement an automated deployment solution that provides consistency and audit trails while improving productivity for your release engineers, operations personnel, and testers. See how!
Learn more about UrbanCode: http://ibm.biz/learnurbancode
The document discusses IBM's UrbanCode products for application release automation and DevOps. It summarizes recent developments in UrbanCode Deploy and Release, including new capabilities for deploying containerized applications, managing WebSphere Application Server configurations, and integrating with additional systems of record. It also outlines key trends in application release automation for 2016 such as hybrid cloud deployments, containers, and cognitive capabilities. The document is intended to highlight capabilities of IBM's UrbanCode products and services for application delivery and DevOps.
Case Study: SunTrust’s Next Gen QA and Release Services Transformation JourneyCA Technologies
SunTrust’s journey from challenge identification through charter definition, execution practices, and key metrics and results in their transformation of traditional QA and Release functions into a more cohesive, collaborative, and “continuous” model.
For more information, please visit http://cainc.to/Nv2VOe
Continuous Integration for Fun and Profitinovex GmbH
Agile Continuous Integration promises to significantly improve the development and delivery of software with the help of pipelines. The road to the final implementation, however, can be paved with unforeseen effort. We will therefore look at some helpful methods and tools for implementing such pipelines, with a focus on Continuous Integration, in order to round off our agile development process and thereby gain time for the important things in everyday work.
Prerequisites: a basic understanding of software development
Learning goals: We will discuss the background of Continuous Integration/Delivery and look at a real-world example intended above all to highlight the advantages of CI/CD.
Event: enterJS, 16.06.2016, Darmstadt
Speaker: Arnold Bechtoldt
More tech talks: https://www.inovex.de/de/content-pool/vortraege/
Continuous Integration to Shift Left Testing Across the Enterprise StackDevOps.com
With the move to agile DevOps, automated testing is a critical function to ensure high quality in continuous deployments.
In this session, learn how to start testing earlier and often to ensure quality in your codebase. Join Architect Suman Gopinath and Offering Manager Korinne Alpers to talk about shifting-left in the development cycle, starting with unit testing as a key aspect of continuous integration. You'll view a demo of the latest zUnit unit testing tooling for CICS Db2 applications, as well as hear best practices and tales from the testing trenches.
DevOps aims to bring development and operations teams closer together through automation and shared tools and processes. Automating builds improves consistency, reduces errors, and improves productivity. Common build problems include excessive duration, high volume, and complexity. Solutions include improving build speed, addressing long or complex builds through techniques like distributed builds, and using build-acceleration tools. Automation is a key part of DevOps and enables continuous integration, testing, and deployment.
This document discusses continuous integration for System z mainframe applications. It begins with an overview of DevOps and continuous integration concepts. It then discusses the IBM DevOps solution and challenges of applying DevOps to System z environments. The document focuses on how continuous integration can be implemented for System z to provide rapid feedback, automated testing in isolated environments, and higher quality code promoted between stages. It also discusses how continuous testing can be achieved through dependency virtualization to improve testing efficiency.
The presentation about the fundamentals of DevOps workflow and CI/CD practices I presented at Centroida (https://centroida.ai/) as a back-end development intern.
Arun Prasad is a Test Engineer with over 3.5 years of experience in software testing. He has experience in manual testing, automation using shell scripts and Python, and security testing using Linux distributions like Backtrack and Kali. He has worked on projects for clients like Lenovo, American Megatrends, and Fujitsu, focusing on testing firmware, IPMI functionality, and management software. His roles have included test case development, automation, defect logging, and ensuring tasks are completed on time.
Back-end testing is an unfamiliar area to many testers, especially when the back end adopts web services technologies and has gigabytes of data that need to be verified. The presentation outlines a number of testing activities needed to deal with these challenges.
Services/Domain Testing Introduction:
We have been providing an automation test service for a back-end system that uses web services, web application technologies, and metadata processing. The domain we have worked in is Communications, Media, and Entertainment.
Challenges:
Complex business logic inside layers of data storage and processing that provide services; different platforms under test.
Fragmented testing results, making it difficult to make decisions.
Testing must be aligned with the development life cycle.
Solutions:
Apply automation testing within Continuous Integration.
Design an automation test framework to handle shell scripts, web services, web applications, and gigabytes of XML data on Windows and Linux.
Select a proper technology stack to centralize testing results from both the manual and automation teams.
Jenkins, a continuous integration and continuous delivery application, serves as the starting point: it runs a job to build the source code from the development team. When the unit tests for the source code pass, the automated system tests written in LISA are launched, with LISA acting as the flow controller for the automation test framework.
LISA’s core functionality is to verify the middleware layer, SOAP/RESTful web services, and the database. Extensions of LISA’s capabilities are also applied in practice to test other technologies under test, such as web applications (by integrating with Selenium), shell scripts (via JCraft), and large data files (via XStream/JAXB).
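The framework described above verifies XML-heavy service responses; as a stand-in for the LISA/XStream tooling it actually uses, here is a stdlib-only Python sketch of the kind of verification step involved. The payload, element names, and expected values are all invented for illustration.

```python
# Stdlib-only sketch of one verification step such a framework might run:
# parse an XML response and assert on expected fields. The payload and
# field names here are invented for illustration.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """\
<catalog>
  <asset id="a1"><title>Movie One</title><status>PUBLISHED</status></asset>
  <asset id="a2"><title>Movie Two</title><status>DRAFT</status></asset>
</catalog>"""

def published_titles(xml_text):
    """Return titles of all assets whose status is PUBLISHED."""
    root = ET.fromstring(xml_text)
    return [a.findtext("title")
            for a in root.iter("asset")
            if a.findtext("status") == "PUBLISHED"]

print(published_titles(SAMPLE_RESPONSE))  # ['Movie One']
```

For gigabyte-sized files, a streaming parser (e.g. `ET.iterparse`) would replace `fromstring` so the whole document never sits in memory at once.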
This document is a resume for Raushan Kumar, who has over 6 years of experience working with Documentum applications including development, support, and maintenance. He has skills in technologies like Documentum, Java, XML, and Linux/Unix administration. Currently he works as an Advanced ECM Developer for NNIT in Prague, Czech Republic. Previous roles include Worldwide Technical Engineer for EMC and several projects for Novartis as a Senior Project Engineer and Project Engineer. He has experience customizing applications, installing and configuring servers, writing scripts, and following GxP compliance standards.
Orchestrate Your End-to-end Mainframe Application Release PipelineDevOps.com
What steel and concrete are to a skyscraper, the mainframe is to the global economy. The mainframe is the transactional backbone for 96 of the world’s top 100 banks, 23 of the 25 top US retailers and 9 out of 10 of the world’s largest insurance companies.
When you think of a mainframe, you probably think of an old green computer screen. Did you know you can use the same modern tools and techniques with mainframes that you use with cloud and mobile?
With the growth of mission-critical mainframe workloads showing no signs of slowing down, application delivery cannot remain slow and complex. Organizations must apply the same DevOps processes to the mainframe as they do with other platforms.
Compuware and XebiaLabs enable large enterprises to automatically build, test and deploy mainframe releases within a cross-platform application release pipeline.
Simple architecture principles expressed in twelve "factors" can prepare an application for straightforward deployment into diverse environments, infrastructures, and platforms.
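One of those factors, storing config in the environment, is easy to sketch. The variable names and defaults below are illustrative assumptions, not part of any particular framework.

```python
# Sketch of the twelve-factor "Config" principle: read deploy-specific
# settings from the environment instead of baking them into the code.
# Variable names and defaults here are illustrative assumptions.
import os

def load_config(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "workers": int(env.get("WEB_CONCURRENCY", "2")),
    }

# The same code runs unchanged in dev and prod; only the environment differs.
cfg = load_config({"DATABASE_URL": "postgres://db/prod", "LOG_LEVEL": "WARNING"})
print(cfg["database_url"], cfg["workers"])  # postgres://db/prod 2
```

Because the code never hard-codes an environment, the same artifact can be promoted from test to staging to production untouched.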
This document provides an overview of VMware's journey to the third platform and cloud native applications. It discusses how the third platform is disrupting businesses through faster technology adoption rates. It outlines how developers can use tools like Docker, Kubernetes, and VMware's products like vSphere Integrated Containers, Photon Platform, and vRealize CodeStream to develop and deploy cloud native applications. The document emphasizes that automating processes through DevOps and continuous integration/delivery is necessary for businesses to adapt and avoid disruption in today's environment. It argues VMware's products provide a path for developing and operating cloud native applications while leveraging existing VMware investments in virtualization.
This document discusses Bakson's efforts to implement continuous integration, delivery, and deployment practices for Ticketmaster's API team. It outlines the tools used such as Gitlab, Jenkins, SonarQube, Nexus, Rundeck, and Gatling. Automation is triggered upon code commits to run tests and deploy to environments. Testing occurs for each microservice rather than all services at once. This allows faster feedback loops while deploying features. The goal is to deploy to production continuously while ensuring quality and stability.
This document provides a summary of Birendra Kumar's career objective, work experience, skills, and projects. He has over 7 years of experience as a senior software engineer working with multicast protocols, routing protocols, operating systems like AIX and Integrity RTOS. Some of his key projects include implementing PIM passive mode for multicast routing and developing logging and debugging infrastructure. He is proficient in C, C++, shell scripting, and has worked on tools like Git, Visual Studio, and ClearCase.
Birendra Kumar has over 8 years of experience as a senior software engineer and developer. He has extensive experience working with protocols like PIM, multicast, IGMP, and unicast routing. Some of his responsibilities have included developing features, resolving defects, and writing test cases. He is proficient in languages like C, C++, and shell scripting. Birendra holds a Bachelor's degree in Computer Science and has received several achievements and appreciations for his work.
Fred McLain has over 15 years of experience as a software engineer and technical lead. He currently works at General Dynamics developing software for NASA's satellite communications systems. Previously he has worked on aircraft structural analysis tools at Boeing and developed open source accessibility tools for blind developers. He has extensive experience with Java, REST, distributed systems, and Agile development practices.
This document provides information on Jenkins, including:
- Jenkins is an open source automation tool that allows continuous integration and delivery of software projects. It builds, tests, and prepares code changes for release.
- Key benefits of Jenkins include speeding up the software development process through automation, integrating with many testing and deployment technologies, and making it easier for developers to integrate changes and users to obtain fresh builds.
- Jenkins uses plugins to integrate various DevOps stages like build, test, package, deploy, etc. It supports pipelines to automate development tasks.
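Real Jenkins pipelines are written in a Groovy "Jenkinsfile"; the Python sketch below only models the stage-by-stage flow such a pipeline automates. Stage names and the fail-fast behavior shown are illustrative, not a Jenkins API.

```python
# Model of a CI pipeline's sequential stage flow: run stages in order
# and stop at the first failure, so later stages (e.g. deploy) never
# run on broken code. Stage names and actions are illustrative.

def run_pipeline(stages):
    """Run (name, action) pairs in order; stop at the first failure."""
    results = []
    for name, action in stages:
        ok = action()
        results.append((name, "SUCCESS" if ok else "FAILURE"))
        if not ok:
            break  # fail fast: remaining stages are skipped
    return results

stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("package", lambda: True),
    ("deploy", lambda: True),
]
print(run_pipeline(stages))
```

In Jenkins itself, each stage would invoke plugins (compilers, test runners, deployment tools) rather than a lambda, but the gating logic is the same.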
Similar to A Next-Gen Continuous Integration Solution to Improve Software Delivery (20)
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary spending, for example using a person document instead of a mail-in for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can put into practice immediately
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly: we no longer talk about information systems but about applications. Applications evolved in a way that breaks data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is repaid by taking even bigger "loans", resulting in ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service, including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
12. The Plan
〉 Git pushes go to feature/bugfix branches
〉 Every Git push triggers a test
〉 Tests run in prod-like environments
〉 Tests run in isolated/dedicated environments
〉 Automate (almost) everything
〉 Increase & maintain (infra) test coverage
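The plan above (every push to a feature or bugfix branch triggers tests in its own isolated, prod-like environment) can be sketched as a simple dispatch rule. The branch-naming convention and environment names below are assumptions for illustration.

```python
# Sketch of the push-triggered testing plan: every push to a feature
# or bugfix branch gets its own isolated test environment.
# Branch-name conventions and environment naming are assumptions.

def plan_for_push(branch):
    """Decide what the CI server should do for a pushed branch."""
    if branch.startswith(("feature/", "bugfix/")):
        # one dedicated environment per branch keeps test runs isolated
        return {"run_tests": True,
                "environment": "test-" + branch.split("/", 1)[1]}
    return {"run_tests": False, "environment": None}

print(plan_for_push("feature/login"))   # isolated env "test-login"
print(plan_for_push("master"))          # no automatic test run in this sketch
```

Per-branch environments are what make the parallel, isolated integration testing from the conclusions slide possible: two pushes never share state.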
17. Conclusions
〉 Don't underestimate the effort for CI/CD preparation
〉 Isolated integration testing at ludicrous speed
〉 Infrastructure as Code improves documentation
〉 Similarity to production leads to faster bugfixing
〉 Parallel testing increases work efficiency