DevOps is a software development approach that emphasizes collaboration between development and operations teams throughout the development lifecycle. Central to DevOps is continuous delivery, which releases software frequently through an automated testing pipeline. The pipeline incorporates different types of testing at different stages to catch issues early, and automated deployment opens up further testing opportunities, such as automated functional and security testing. Implementing practices like continuous integration, unit testing, code coverage, mutation testing, static analysis, and automated deployment verification can improve software quality by enabling more testing and fearless refactoring.
Building security into software is harder than it should be. This article explores a way to align application security practices with other software development best practices in order to make building security in easier to manage and more cost effective. In particular, this article looks at combining continuous integration (CI) with security testing and secure static code analysis.
10 Things You Might Not Know: Continuous Integration – Coveros, Inc.
The name says it all. Continuous integration (CI) is the process of continually integrating your software to ensure that any software issues are eliminated as early as possible during development. Effective CI heavily leverages automation.
Compatibility Testing of Your Web Apps - Tips and Tricks for Debugging Locall... – Sauce Labs
Test automation is all about running the most tests in the least amount of time. This is great for mature apps, but in the early stages of developing your web or mobile app, developers need to run a number of tests to ensure the app runs at all. Further complicating the issue, your app is often architected differently for web and mobile, which makes writing automated tests tricky.
Test Automation Specialist Max Saperstone from Coveros will cover some simple testing examples and demonstrate how to expand these for testing over multiple web architectures. He will briefly cover the difference in the design of these sites with a focus on how tests can be designed to overcome their limitations, minimizing duplicate code, and following best practices.
Continuous Delivery in a Legacy Shop - One Step at a Time – Gene Gotimer
Not every continuous delivery (CD) initiative starts with someone saying “Drop everything. We’re going to do DevOps.” Sometimes, you have to grow your process incrementally. And sometimes you don’t set out to grow at all—you are just fixing problems with your process, trying to make things better. Gene Gotimer discusses techniques and the chain of tools he has used to bring a DevOps mindset and CD practices into a legacy environment. Gene discusses how his team started fixing problems and making process improvements in development. From there, they tackled one problem after another, each time making the release a little better and a little less risky. They incrementally brought their practices through other environments until the project was confidently delivering working and tested releases every two weeks. Gene shares their journey and the tools they used to build quality into the product, the releases, and the release process.
Why do we need to have software testing happen in a continuous manner? This deck explains the importance of Continuous Integration and a case study of 24x7 Testing.
How to go from waterfall app dev to secure agile development in 2 weeks – Ulf Mattsson
Waterfall is based on the concept of sequential software development—from conception to ongoing maintenance—where each of the many steps flows logically into the next.
Join this webinar presentation to learn:
- Why DevOps cannot effectively work in waterfall
- How to use DevOps tools to optimize processes in either development or operations through automation
We will also discuss what is needed to support full DevOps.
Testing in a Continuous Delivery Pipeline - Better, Faster, Cheaper – Gene Gotimer
The continuous delivery pipeline is the process of taking new or changed features from developers, and getting features deployed into production and delivered quickly to the customer. Gene Gotimer says testing within continuous delivery pipelines should be designed so the earliest tests are the quickest and easiest to run, giving developers the fastest feedback. Successive rounds of testing lead to increased confidence that the code is a viable candidate for production and that more expensive tests—time, effort, cost—are justified. Manual testing is performed toward the end of the pipeline, leaving computers to do as much work as possible before people get involved. Although it is tempting to arrange the delivery pipeline in phases (e.g., functional tests, then acceptance tests, then load and performance tests, then security tests), this can lead to serious problems progressing far down the pipeline before they are caught. Gene shows how to arrange your tests so each round provides just enough testing to give you confidence that the next set of tests is worth the investment. He explores how to get the right types of testing into your pipeline at the right points.
Shifting the conversation from active interception to proactive neutralization – Rogue Wave Software
When did we forget that old saying, “prevention is the best medicine”, when it comes to cybersecurity? The current focus on mitigating real-time attacks and creating stronger defensive networks has overshadowed the many ways to prevent attacks right at the source – where security management has the biggest impact. Source code is where it all begins and where attack mitigation is the most effective.
In this webinar we’ll discuss methods of proactive threat assessment and mitigation that organizations use to advance cybersecurity goals today. From using static analysis to detect vulnerabilities as early as possible, to managing supply chain security through standards compliance, to scanning for and understanding potential risks in open source, these methods shift attack mitigation efforts left to simplify fixes and enable more cost-effective solutions.
Webinar recording: http://www.roguewave.com/events/on-demand-webinars/shifting-the-conversation-from-active-interception
Presented at STPCon 2016. With the extensive amount of testing performed nightly on large software projects, test and verification teams often experience lengthy wait times for the availability of test results of the latest build. As we strive to identify and resolve issues as fast as possible, alternative methods of test execution have to be found. Learn how to use Jenkins to launch tests in parallel across a number of Virtual Machines, monitor execution health, and process results. Learn about various Jenkins plugins and how they contributed to the solution. Learn how to trigger downstream jobs, even if they are on separate Jenkins instances.
Security is tough, and it is even tougher in complex environments with lots of dependencies and a monolithic architecture. With the emergence of microservice architecture, security has become somewhat easier; however, it introduces its own set of security challenges. This talk will showcase how we can leverage DevSecOps techniques to secure APIs/microservices using free and open source software. We will also discuss how emerging technologies like Docker, Kubernetes, Clair, Ansible, Consul, Vault, etc., can be used to scale and strengthen the security program for free.
More details here - https://www.practical-devsecops.com/
DevOps Continuous Integration & Delivery - A Whitepaper by RapidValue
In this whitepaper, we will deep dive into the concepts of continuous integration, continuous delivery, and continuous deployment and explain how businesses can benefit from them. We will also explain how to build an effective CI/CD pipeline and some of the best practices for your enterprise DevOps journey.
Rapid software testing and conformance with static code analysis – Rogue Wave Software
With growing connectivity between complex automotive software components, development teams are looking for new ways to verify code security and validate against standards. This presentation explains an exciting new approach to software testing that combines the breadth and depth of static analysis with modern test automation to provide rapid feedback to developers on incremental code changes – continuous static code analysis. By connecting deep analysis to continuous integration workflows, testing is pulled forward earlier to eliminate defects and reduce rework costs.
Walk away with knowledge of real defects, security vulnerabilities, and automotive standards (such as MISRA and ISO 26262) plus key steps to start immediate deployment of continuous static code analysis for testing. Presented at GENIVI All Member Meeting & Open Community Days.
Better Security Testing: Using the Cloud and Continuous Delivery – Gene Gotimer
Even though many organizations claim that security is a priority, that claim doesn’t always translate into supporting security initiatives in software development or test. Security code reviews often are overlooked or avoided, and when development schedules fall behind, security testing may be dropped to help the team “catch up.” Everyone wants more secure development; they just don’t want to spend time or money to get it. Gene Gotimer describes his experiences with implementing a continuous delivery process in the cloud and how he integrated security testing into that process. Gene discusses how to take advantage of the automated provisioning and automated deploys already being implemented to give more opportunities along the way for security testing without schedule disruption. Learn how you can incrementally mature a practice to build security into the process—without a large-scale, time-consuming, or costly effort.
From the DevOps Lisbon Meetup @Beta-i (May 2016)
Presenter: Filipe Bartolomeu
Title: What I've Learned About Test Automation and DevOps
Agile/DevOps methodologies have increased deployment frequency by an order of magnitude. That wouldn't be achievable without quality assurance, which is where test automation (TA) comes into play. A presentation about the pitfalls, techniques, and current trends of test automation.
IoT Software Testing Challenges: The IoT World Is Really Different – TechWell
With billions of devices containing new software connected to the Internet, the Internet of Things (IoT) is poised to become the next growth area for software development and testing. Although many traditional test techniques and strategies remain viable, challenges in IoT testing include huge amounts of data, multiple communication channels, device protocols, resource limitations (battery or memory), addressing sensors and controllers, cloud-hardware-device integration, and security concerns. Jon Hagar says that for IoT testers to be successful, they must develop new knowledge and skills, and apply them based on real data and proven test design methods. Testing analytics should include raw test data, data relationships across software integration boundaries, and social media inputs, as well as a keen understanding of sociological and psychological factors. Jon shares insights into math-based testing, model-based testing, attack-based exploratory testing, and appropriate types of standards as basics of IoT testing. Take back a new holistic view for your IoT testing which considers the world environment, connected systems, local systems, and the IoT device itself.
An introduction to the concepts behind Continuous Delivery as well as an introduction to some of the tools available for implementing continuous delivery practices on a new project. This presentation is geared towards Java developers, but is applicable to all.
Today’s cutting edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share best practices (including ones followed internally at Amazon) and how you can bring them to your company by using open source and AWS services.
Speaker: Raghuraman Balachandran, Solutions Architect, Amazon India
Quality Metrics of Test Suites in Test-Driven Designed Applications – ijseajournal
New techniques for writing and developing software have evolved in recent years. One is Test-Driven Development (TDD), in which tests are written before code: no code should be written without first having a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be high.

In this work, we analyze applications written using TDD and traditional techniques. Specifically, we demonstrate the quality of the associated test suites based on two quality metrics: 1) a structure-based criterion and 2) a fault-based criterion. We find that test suites with high branch test coverage also have high mutation scores, and we especially reveal this in the case of TDD applications. We found that Test-Driven Development is an effective approach that improves the quality of the test suite, covering more of the source code and also revealing more faults.
What is Unit Testing? - A Comprehensive Guide – flufftailshop
Software development involves different steps and processes, ranging from writing code and testing every function to debugging and deploying. Unit testing is an important test method used by QA teams to ensure that a software product is free of errors and meets all essential requirements.
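As a minimal sketch of what a unit test looks like in practice (the function and test names below are invented for illustration):

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Each test exercises one behavior of one unit in isolation.
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_invalid_percent_rejected(self):
        # Error handling is part of the unit's contract, so it gets a test too.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Saved as a module, this runs under `python -m unittest`, which is typically wired into a CI pipeline so the suite executes on every commit.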
https://go-dgtl.com/whitepaper/devops-explained-best-practices/?utm_source=offpage&utm_medium=thirdparty&utm_campaign=alo-seo - DevOps is one of the best solutions that comes into play here. It helps bring together a company’s software development and IT operations teams, promoting collaboration and enhancing relationships.
Analysis and Design of Algorithms (ADA): An In-depth Exploration
Introduction:
The field of computer science is heavily reliant on algorithms to solve complex problems efficiently. The analysis and design of algorithms (ADA) is a fundamental area of study that focuses on understanding and creating efficient algorithms. This comprehensive overview will delve into the various aspects of ADA, including its importance, key concepts, techniques, and applications.
Importance of ADA:
Efficient algorithms play a critical role in various domains, including software development, data analysis, artificial intelligence, and optimization. ADA provides the tools and techniques necessary to design algorithms that are both correct and efficient. By analyzing the performance characteristics of algorithms, ADA enables computer scientists and engineers to develop solutions that save time, resources, and computational power.
Key Concepts in ADA:
Correctness: ADA emphasizes the importance of designing algorithms that produce correct outputs for all possible inputs. Techniques like mathematical proofs and induction are used to establish the correctness of algorithms.
Complexity Analysis: ADA seeks to analyze the efficiency of algorithms by examining their time and space complexity. Time complexity measures the amount of time required by an algorithm to execute, while space complexity measures the amount of memory consumed.
Asymptotic Notations: ADA employs asymptotic notations, such as Big O, Omega, and Theta, to express the growth rates of functions and classify the efficiency of algorithms. These notations allow for a concise comparison of algorithmic performance.
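The notations above have standard formal definitions; a sketch, stated for non-negative functions:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 :\ 0 \le f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 :\ f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

Informally: Big O bounds growth from above, Omega from below, and Theta pins it to within constant factors, which is why $3n^2 + 5n = \Theta(n^2)$.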
Algorithm Design Paradigms: ADA explores various design paradigms, including divide and conquer, dynamic programming, greedy algorithms, and backtracking. Each paradigm offers a systematic approach to solving problems efficiently.
Techniques in ADA:
Divide and Conquer: This technique involves breaking down a problem into smaller subproblems, solving them independently, and combining the solutions to obtain the final result. Well-known algorithms like Merge Sort and Quick Sort utilize the divide and conquer approach.
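A minimal Merge Sort illustrates the divide-and-conquer pattern described above (function names are my own):

```python
def merge_sort(items):
    """Sort a list by splitting it in half and merging the sorted halves."""
    if len(items) <= 1:              # base case: a 0- or 1-element list is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # divide: solve each half independently
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0          # combine: merge two sorted lists
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])          # append whichever half has leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Splitting halves the problem size at each level and merging costs linear time per level, giving the well-known O(n log n) running time.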
Dynamic Programming: Dynamic programming breaks down a complex problem into a series of overlapping subproblems and solves them in a bottom-up manner. This technique optimizes efficiency by storing and reusing intermediate results. The Fibonacci sequence calculation is a classic example of dynamic programming.
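The Fibonacci example mentioned above can be sketched bottom-up, storing only the two intermediate results each step reuses:

```python
def fib(n):
    """Bottom-up dynamic programming for the n-th Fibonacci number."""
    if n < 2:
        return n
    prev, curr = 0, 1                    # fib(0), fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr   # reuse stored subproblem results
    return curr

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

The naive recursive definition recomputes the same subproblems exponentially many times; reusing the stored results reduces the cost to linear time and constant space.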
Greedy Algorithms: Greedy algorithms make locally optimal choices at each step, with the hope of achieving a global optimal solution. These algorithms are efficient but may not always yield the best overall solution. The Huffman coding algorithm for data compression is a widely used example of a greedy algorithm.
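A compact sketch of Huffman coding shows the greedy choice in action: at every step, merge the two least frequent subtrees (the implementation details below are my own; it assumes at least two distinct symbols):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code, giving frequent symbols shorter codes."""
    # Each heap entry: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # greedy choice: two cheapest subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# The most frequent symbol gets the shortest code, e.g. "a" shorter than "c".
```

The local choice (merge the two rarest subtrees) happens to yield a globally optimal prefix code here, but as the paragraph above notes, greedy strategies do not always achieve that for other problems.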
Backtracking: Backtracking involves searching for a solution to a problem by incrementally building a solution and undoing the choices that lead to dead-ends.
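The classic N-queens puzzle makes the build-then-undo pattern concrete (a sketch with invented names, placing one queen per row):

```python
def solve_n_queens(n):
    """Find all placements of n queens so that none attack each other."""
    solutions = []

    def place(queens):                   # queens[r] = column of the queen in row r
        row = len(queens)
        if row == n:                     # a complete, valid placement
            solutions.append(list(queens))
            return
        for col in range(n):
            # Safe if no earlier queen shares this column or a diagonal.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(queens)):
                queens.append(col)       # tentative choice
                place(queens)            # explore deeper
                queens.pop()             # backtrack: undo a dead-end choice
    place([])
    return solutions

print(len(solve_n_queens(4)))  # 2
```

Pruning branches as soon as a partial placement conflicts is what keeps backtracking far cheaper than enumerating all n^n complete placements.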
Software testing services in India.pptx – SakshiPatel82
Our Software testing services in India are typically provided by specialized testing companies, independent testing teams, or integrated within software development companies. These services help businesses mitigate risks, enhance software quality, accelerate time-to-market, and ultimately improve customer satisfaction by delivering reliable and robust software solutions. Visit https://www.vtestcorp.com/ for details.
To help overcome this obstacle, DevOps solutions kick in. This blog post will offer an in-depth overview of the DevOps lifecycle, including the stages involved. So, whether you are new to DevOps or looking to enhance your existing knowledge, this Ecosmob guide will provide you with a complete understanding of the DevOps lifecycle.
Exploring the Phases of DevOps Lifecycle: Case Studies and Tools – SofiaCarter4
This guide delves into the different phases of the DevOps lifecycle, examining the best practices, tools and technologies involved in each stage. Through the use of case studies, readers gain insights into how DevOps practices have been successfully implemented in real-world scenarios, providing actionable knowledge that can be applied to their own projects.
Which Development Metrics Should I Watch? – Coveros, Inc.
W. Edwards Deming noted that “people with targets and jobs dependent upon meeting them will probably meet the targets – even if they have to destroy the enterprise to do it.” While metrics can be a great tool for evaluating performance and software quality, becoming beholden to reaching metrics goals, especially the wrong ones, can be detrimental to the project. Each team needs to take care and understand what targets are appropriate for their project. They also need to consider the current and desired states of the source code and product and the capabilities and constraints of the team.
As one of the lead architects working with a huge codebase on a government project, I often have the opportunity to influence the teams around me into watching or ignoring various metrics. I will walk through some measures that are available to most projects and discuss what they really mean, various misconceptions about their meaning, the tools that can be used to collect them, and how you can use them to help your team. I’ll discuss experiences and lessons learned (often the hard way) about using the wrong metrics and the damage they can do.
This session is aimed at development leads and others that are trying to choose the right metrics to measure or trying to influence what metrics to avoid.
System Event Monitoring for Active AuthenticationCoveros, Inc.
The authors use system event monitoring to distinguish between the behavioral characteristics of normal and anomalous computer system users. Identifying anomalous behavior at the system event level diminishes privacy concerns and supports the identification of cross-application behavioral patterns.
Better Security Testing: Using the Cloud and Continuous Delivery Coveros, Inc.
Even though many organizations claim that security is a priority, that claim doesn’t always translate into supporting security initiatives in software development or test. Security code reviews often are overlooked or avoided, and when development schedules fall behind, security testing may be dropped to help the team “catch up.” Everyone wants more secure development; they just don’t want to spend time or money to get it. Gene Gotimer describes his experiences with implementing a continuous delivery process in the cloud and how he integrated security testing into that process. Gene discusses how to take advantage of the automated provisioning and automated deploys already being implemented to give more opportunities along the way for security testing without schedule disruption. Learn how you can incrementally mature a practice to build security into the process—without a large-scale, time-consuming, or costly effort.
Create Disposable Test Environments with Vagrant and Puppet Coveros, Inc.
As the pace of development increases, testing has more to do and less time in which to do it. Software testing must evolve to meet delivery goals while continuing to meet quality objectives. Gene Gotimer explores how tools like Vagrant and Puppet work together to provide on-demand, disposable test environments that are delivered quickly, in a known state, with pre-populated test data and automated test fixture provisioning. With a single command, Vagrant provisions one or more virtual machines on a local box, in a private or public cloud. Puppet then takes over to install and configure software, setup test data, and get the system or systems ready for testing. Since the process is automated, anyone on the team can use the same Vagrant and Puppet scripts to get his own virtual environment for testing. When you are finished with it, Vagrant tears it back down and restores it to the same original state.
Continuous Delivery in a Legacy Shop - One Step at a Time Coveros, Inc.
Not every continuous delivery (CD) initiative starts with someone saying “Drop everything. We’re going to do DevOps.” Sometimes, you have to grow your process incrementally. And sometimes you don’t set out to grow at all—you are just fixing problems with your process, trying to make things better. Gene Gotimer discusses techniques and the chain of tools he has used to bring a DevOps mindset and CD practices into a legacy environment. Gene discusses how his team started fixing problems and making process improvements in development. From there, they tackled one problem after another, each time making the release a little better and a little less risky. They incrementally brought their practices through other environments until the project was confidently delivering working and tested releases every two weeks. Gene shares their journey and the tools they used to build quality into the product, the releases, and the release process.
DevOps in a Regulated and Embedded Environment (AgileDC) Coveros, Inc.
Embedded environments greatly restrict the tools available for a DevOps pipeline. A regulated environment changes the processes a development team can use to deliver software. The combination results in a highly restricted environment that forces the team back to first principles, finding what can actually work. In this talk, we'll consider the options, develop a set of helpful tools and discuss the challenges facing any team working on DevOps in unfavorable environments.
Together, we'll examine my experiences with a medical device company, where I built a DevOps pipeline for software controlling a heart pump. I would like to discuss the tools that worked as well as the principles that lead our team to success.
Compatibility Testing of Your Web Apps - Tips and Tricks for Debugging Locall...Coveros, Inc.
Test automation is all about running the most tests in the least amount of time. This is great for mature apps, but in the early stages of developing your web or mobile app, developers need to run a number of tests to ensure the app runs at all. Further complicating the issue is that often, your app is architect-ed differently for web and mobile which makes writing automated tests tricky.
Test Automation Specialist Max Saperstone from Coveros will cover some simple testing examples and demonstrate how to expand these for testing over multiple web architectures. He will briefly cover the difference in the design of these sites with a focus on how tests can be designed to overcome their limitations, minimizing duplicate code, and following best practices.
Developing a delivery pipeline means more than just adding automated deploys to the development cycle. To be successful, tests of all types must be incorporated throughout the process in order to be sure that problems aren’t slipping through. Most pipelines include unit tests, functional tests, and acceptance tests, but those aren’t always enough. I’ll present some types of testing you might not have considered, or at least might not have considered the importance of. Some types will address code quality, others code security, and some the health and security of the pipeline itself.
This talk is aimed at people that are trying to build confidence in their continuous delivery pipeline. I’ll talk about specific tools we used to supplement our pipeline testing. I won’t get into how to use each tool-- this is more of a series of teasers to encourage people to look into the tools, and even letting them know what types of tools and testing opportunities are out there.
Testing in a Continuous Delivery Pipeline - Better, Faster, Cheaper Coveros, Inc.
The continuous delivery pipeline is the process of taking new or changed features from developers, and getting features deployed into production and delivered quickly to the customer. Gene Gotimer says testing within continuous delivery pipelines should be designed so the earliest tests are the quickest and easiest to run, giving developers the fastest feedback. Successive rounds of testing lead to increased confidence that the code is a viable candidate for production and that more expensive tests—time, effort, cost—are justified. Manual testing is performed toward the end of the pipeline, leaving computers to do as much work as possible before people get involved. Although it is tempting to arrange the delivery pipeline in phases (e.g., functional tests, then acceptance tests, then load and performance tests, then security tests), this can lead to serious problems progressing far down the pipeline before they are caught. Gene shows how to arrange your tests so each round provides just enough testing to give you confidence that the next set of tests is worth the investment. He explores how to get the right types of testing into your pipeline at the right points.
Web Application Security Testing: Kali Linux Is the Way to Go Coveros, Inc.
Many free security testing tools are available, but finding ones that meet your needs and work in your environment can involve substantial time and effort. Especially when you are just starting out with security testing, finding reputable tools that do what you need is not easy. And installing them correctly just to evaluate them can be prohibitively time consuming. Kali Linux is a free Linux distribution with hundreds of security testing and auditing tools installed. Gene Gotimer gives an overview of Kali Linux, ways to effectively use it, and a survey of the tools available. Although Kali Linux is primarily intended for professional penetration testers, it provides great convenience and value to developers and software testers who may be getting started in security testing. Gene demonstrates some of the simplest tools to help jumpstart your web application security testing practices.
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntekBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Accelerate Enterprise Software Engineering with PlatformlessWSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Designing for Privacy in Amazon Web ServicesKrzysztofKkol1
Data privacy is one of the most critical issues that businesses face. This presentation shares insights on the principles and best practices for ensuring the resilience and security of your workload.
Drawing on a real-life project from the HR industry, the various challenges will be demonstrated: data protection, self-healing, business continuity, security, and transparency of data processing. This systematized approach allowed to create a secure AWS cloud infrastructure that not only met strict compliance rules but also exceeded the client's expectations.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge on how to organize and improve your code review proces
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTier1 app
Even though at surface level ‘java.lang.OutOfMemoryError’ appears as one single error; underlyingly there are 9 types of OutOfMemoryError. Each type of OutOfMemoryError has different causes, diagnosis approaches and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
How Recreation Management Software Can Streamline Your Operations.pptxwottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I didn't get rich from it but it did have 63K downloads (powered possible tens of thousands of websites).
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
CrossTalk—May/June 2016
INTEGRATION AND INTEROPERABILITY
DevOps Advantages for Testing: Increasing Quality through Continuous Delivery
Gene Gotimer, Coveros
Thomas Stiehm, Coveros

Abstract. DevOps and continuous delivery can improve software quality and reduce risk by offering opportunities for testing and some non-obvious benefits to the software development cycle. By taking advantage of cloud computing and automated deployment, throughput can be improved while increasing the amount of testing and ensuring high quality. This article points out some of these opportunities and offers suggestions for making the most of them.

DevOps

DevOps is a software development culture that stresses collaboration and integration between software developers, operations personnel, and everyone involved in the design, creation, development, and delivery of software. It is based on the same principles that were identified in the Agile Manifesto [1], but while many agile methodologies focus on development only, DevOps extends the culture to the entire software development lifecycle.

Central to DevOps is continuous delivery: delivering software often, possibly multiple times each day, using a delivery pipeline through testing stages that build confidence that the software is a viable candidate for deployment. Continuous delivery (CD) is heavily dependent on automation: automated builds, testing, and deployments. In fact, the reliance on automated deployment is so key that DevOps and CD are often erroneously considered synonymous with automated deployment.

Having a successful delivery pipeline means more than just adding automation. To be effective, tests of all types must be incorporated throughout the process in order to ensure problems aren't slipping through. Those tests include quality checks, functional testing, security tests, performance assessments, and any other type of testing you require before releasing your software.

The delivery pipeline also opens up opportunities to add more testing. Static analysis tools can review code style and test for simple security errors. Automated deployments allow automated functional testing, security tests of the software system as deployed, and performance testing on production-like servers. Continuity of operations (COOP) plans can be tested every time that the infrastructure changes, not just annually in front of the auditors.

With this additional testing, CD can produce software that has fewer defects, can be deployed more reliably, and can be delivered far more confidently than with traditional methodologies. Escaped defect rates drop, teams experience lower stress, and delivery is driven by business need. The benefits aren't just slight improvements. In fact, a 2015 report on DevOps from Puppet Labs found that teams using DevOps experience "60 times fewer failures and recover from failures 168 times faster than their lower-performing peers. They also deploy 30 times more frequently with 200 times shorter lead times [2]."
The choice of tools and frameworks for all of this automation has grown dramatically in recent years, with options available for almost any operating system and any programming language, open source or commercial, hosted or as-a-service. Active communities surround many of these tools, making it easy to find help to start using them and to resolve issues.
Continuous Integration

Building a CD process starts with building a continuous integration (CI) process. In CI, developers frequently integrate their code changes with those of other developers, often multiple times a day. The integrated code is committed to source control, then automatically built and unit tested. Developers get into the rhythm of a rapid "edit-compile-test" feedback loop. Integration errors are discovered quickly, usually within minutes or hours of the integration being performed, while the changes are fresh in the developers' minds.

A CI engine, such as Jenkins [3], is often used to schedule and fire off automated builds, tests, and other tasks every time code is committed. The automated build for each commit makes it virtually impossible for compilation errors and source code integration errors to escape unnoticed. Following the build with unit tests means the developers can have confidence the code works the way they intended, and it reduces the chance that changes had unintended side effects.

Important: Continuous integration is crucial in providing a rapid feedback loop to catch integration issues and unintended side effects.

The choice of CI engine is usually driven by the ecosystem you are working in. Common choices include Jenkins for Linux environments and Team Foundation Server [4] for Windows environments.
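The fast feedback that CI provides depends on having quick, self-contained unit tests for the engine to run on every commit. As a minimal sketch (the `discount()` function and its tests are invented for illustration, not taken from the article):

```python
import unittest

def discount(price, percent):
    """Apply a percentage discount to a price (hypothetical example code)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    # Sunny-day case: a typical discount.
    def test_typical_discount(self):
        self.assertEqual(discount(80.0, 25), 60.0)

    # Invalid input is rejected instead of silently producing a wrong price.
    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(80.0, 150)

# A CI engine would run tests like these (e.g., via "python -m unittest")
# on every commit, failing the build if any assertion fails.
```

Because tests like these run in milliseconds, the developer learns about a broken integration within minutes of committing, while the change is still fresh in mind.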
Code Coverage

CI can also tie in code coverage tools that measure the amount of code that is executed when the unit tests are run. Code coverage can be a good guide to how well the code is unit tested, which in turn tells you how easy it should be to reorganize the code and to change the inner workings without changing the external behavior, a process known as refactoring [5]. Refactoring is an important part of many agile development methodologies, such as extreme programming (XP) [6] and test-driven development (TDD) [7].

In TDD, a test is written to define the desired behavior of a unit of code, which could be a method or a class. The test will naturally fail, since the code that implements the behavior is not yet written. Next, the code is implemented until the test passes. Then the code is refactored by changing it in small, deliberate steps, rerunning the tests after each change to make sure that the external behavior is unchanged. Another test is written to further define the behavior, and the "test-implement-refactor" cycle repeats.

By definition, code behavior does not change during refactoring. If inputs or outputs must change, that is not refactoring. In those cases, the tests will necessarily change as well. They must be maintained along with, and in the same way as, other source code.
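The "test-implement-refactor" cycle can be sketched in a few lines (the `word_count()` function is an invented example, not from the article):

```python
# Step 1 (test first): this test is written before word_count() exists,
# so it initially fails -- there is no behavior to satisfy it yet.
def test_word_count():
    assert word_count("") == 0
    assert word_count("to be or not to be") == 6

# Step 2 (implement): write just enough code to make the test pass.
def word_count(text):
    return len(text.split())

# Step 3 (refactor): change the internals in small, deliberate steps,
# rerunning the test after each change to confirm the external behavior
# is unchanged. Then write the next test and repeat the cycle.

test_word_count()  # passes once the implementation is in place
```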
Without sufficient code coverage you cannot be sure that behavior is unchanged. A change in the untested code may have an unintended effect elsewhere. Having enough unit testing and code coverage means you are free to do fearless refactoring: you can change the design and implementation of the software without worrying that something will break inadvertently. As the software evolves and you learn more about how the software should have been written, you can go back and make changes rather than living with early decisions. In turn, you can move faster at the beginning by "doing the simplest thing that could possibly work [8]" rather than agonizing over every decision to (impossibly) make sure it will address all future needs, known and unknown.

Important: Unit testing and code coverage are about more than just testing. They also enable fearless refactoring and the ability to revisit design and implementation decisions as you learn more.

Code coverage tools are usually programming-language-dependent. JaCoCo [9] is an excellent open-source choice for Java, Coverage.py [10] for Python, and NCover [11] is a popular commercial tool for .NET. Every popular programming language today is likely to have several code coverage tool options.
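To see what a coverage report reveals, consider this sketch (the `shipping_cost()` function and its lone test are invented for illustration):

```python
def shipping_cost(weight_kg):
    """Flat rate up to 2 kg, then a per-kilogram surcharge (invented example)."""
    if weight_kg <= 2:
        return 5.00
    return 5.00 + (weight_kg - 2) * 1.50

# The only unit test exercises the light-package branch.
def test_light_package():
    assert shipping_cost(1) == 5.00

test_light_package()

# A coverage tool such as Coverage.py would report the heavy-package branch
# as never executed -- a warning that refactoring that branch is unprotected
# by tests, so fearless refactoring is not yet possible there.
```

Closing that gap with a second test for the heavy-package branch is what earns the freedom to reorganize the pricing logic later.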
Mutation Testing

Code coverage can't tell the whole story. It only counts how many lines (or methods, or classes, etc.) are executed when the unit tests run, not whether that code is tested well, or at all.

Mutation testing [12] is a process by which existing code is modified in specific ways (e.g., reversing a conditional test from equals to not equals, or flipping a true value to false) and then the unit tests are run again. If the changed code, or mutation, does not cause a test to fail, then it survives. That means the unit tests did not properly test the condition. Even though code coverage may have indicated a method was completely covered, it might not have been completely tested.
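A surviving mutant can be demonstrated in a few lines (the `is_adult()` function and its weak test are invented for illustration):

```python
def is_adult(age):
    return age >= 18

# This test achieves 100% line coverage of is_adult()...
def test_is_adult():
    assert is_adult(30) is True
    assert is_adult(5) is False

# ...but a mutation tool that flips ">=" to ">" produces this mutant:
def is_adult_mutant(age):
    return age > 18

# The same assertions pass against the mutant, so the mutation survives:
# the boundary case age == 18 was never tested, despite full coverage.
test_is_adult()
assert is_adult_mutant(30) is True
assert is_adult_mutant(5) is False
```

Adding an assertion for the boundary value (age 18) would kill this mutant and genuinely test the condition, not just execute it.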
Mutation testing generally runs many times slower than unit tests. But if it can be done automatically, then the cost of running the mutation tests is only the time it takes to review the results. Successful mutation testing leads to higher confidence in unit tests, which leads to even more fearless refactoring.

Suggestion: Use mutation testing tools to determine how effective your unit tests are at detecting code problems.

Tools for mutation testing are available for various programming languages and unit test frameworks. Two mature tools are PIT Mutation Testing [13] for Java and Ninja Turtles [14] for .NET. Humbug [15] is a popular choice for PHP, and many options exist for Python [16].
Static Analysis

Static analysis tools are easy to use via the CI engine. These tools handle many of the common tasks of code review, looking at coding style issues such as variable- and method-naming conventions. They can also identify duplicate code blocks, possible coding issues (e.g., declared but unused variables), and confusing coding practices (e.g., too many nested if-then-else statements). Having these mundane items reviewed automatically can make manual code reviews much more useful, since they can focus on design issues and implementation choices. Since the automated reviews are objective, the coding style can be agreed upon and simply enforced by software.

Important: Static analysis can allow manual code reviews to concentrate on important design and implementation issues, rather than enforcing stylistic coding standards.
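A sketch of the kind of mundane findings meant here (the function is invented; the exact warnings and their names vary by tool, with a Python linter such as pylint being one example):

```python
def classify_customer(age, orders, subscribed):
    temp = age * 2  # a linter would flag this: variable assigned but never used
    if age >= 18:
        if orders > 10:
            if subscribed:  # deep if-then-else nesting: flagged as confusing
                return "vip"
            else:
                return "frequent"
        else:
            return "regular"
    else:
        return "minor"
```

Letting a tool report the unused variable and the nesting on every commit frees human reviewers to ask the more valuable question of whether the classification rules themselves are right.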
Static analysis tools can also identify some serious problems. Race conditions, where parallel code execution can lead to deadlocks or unintended behavior, can be difficult to identify via testing or manual code review, but they can often be detected via static analysis. SQL and other injection vulnerabilities can also be identified, as can resource leaks (e.g., a file handle opened but not closed) and memory corruption (e.g., use after free, dangling pointers).

Since static analysis tools can be fast and can easily run automatically as part of the edit-compile-test cycle, they can be used as a first line of defense against coding errors that can lead to serious security and quality issues.

Important: Static analysis tools can provide early detection of some serious code issues as part of the rapid CI feedback cycle.

Every popular programming language has a selection of static analysis tools, many of them open source. But even easier than choosing one or more and integrating them with your build process or CI engine is installing the excellent open-source tool known as SonarQube [17]. It integrates various analyses for multiple programming languages and displays the combined results in an easy-to-use quality dashboard that tracks trends, identifies problem areas, and can even fail the build when results are beyond project-defined thresholds.
Delivery Pipeline

The delivery pipeline describes the process of taking a code change from a developer and getting it delivered to the customer or deployed into production. CD generally evolves by extending the CI process and adding automated deployment and testing. The delivery pipeline is optimized to remove as many manual delays and steps as practical. The decision to deploy or deliver software becomes a business decision rather than being driven by technical constraints.

The delivery pipeline is often described as a series of triggers: actions, such as code being checked into the source control system, that initiate one or more rounds of tests, known as quality gates. If a quality gate is passed, that triggers more processes, which lead to more quality gates. If a quality gate is not passed, the build is not a viable candidate for production, and no further testing is done. The problems that were discovered are fixed, and the delivery pipeline begins again.
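The trigger-and-gate structure can be sketched conceptually (the `run_pipeline()` helper and the gate names are invented; a real pipeline wires these stages into a CI/CD engine rather than a script):

```python
def run_pipeline(build, gates):
    """Run quality gates in order; stop at the first failure (conceptual sketch)."""
    for name, gate in gates:
        if not gate(build):
            print(f"{build}: gate '{name}' failed -- not a viable candidate")
            return False
        print(f"{build}: gate '{name}' passed")
    print(f"{build}: viable candidate for production")
    return True

# Cheap, fast gates run first; expensive gates run only if earlier ones pass.
gates = [
    ("build and unit tests", lambda build: True),
    ("quick security scan",  lambda build: True),
    ("full regression suite", lambda build: False),  # fails here
    ("load and performance",  lambda build: True),   # never reached
]
run_pipeline("build-42", gates)
```

The key property is the short circuit: a failed gate stops the pipeline, so no time is wasted running expensive tests against a build that is already known not to be viable.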
The delivery pipeline should be arranged so the earliest
tests are the quickest and easiest to run and give the fastest
feedback. Subsequent quality gates lead to higher confidence
that the code is a viable candidate and they indicate more
3. CrossTalk—May/June 2016 15
INTEGRATION AND INTEROPERABILITY
expensive tests (in regards to time, effort, or cost) are justi-
fied. Manual tests migrate towards the end of the pipeline,
leaving computers to do as much work as possible before
humans have to get involved. Computers are significantly
cheaper than people, and humans often work more slowly than
computers: they get sidetracked, go to meetings, and don't
work around the clock.
The CI process is often the first stage of the delivery pipeline,
being the fastest feedback cycle. Often the CI process is block-
ing: a developer will wait until the quality gate is passed before
continuing. Quality gates later in the pipeline are non-blocking:
work continues while the quality checks are underway.
While it can be tempting to arrange the delivery pipeline in
phases (e.g., unit testing, then functional tests, then accep-
tance tests, then load and performance tests, then security
tests), this leaves the process susceptible to allowing seri-
ous problems to progress far down the pipeline, leading to
wasted time testing a non-viable candidate for release and
extending the time between making a change and identifying
any problems. Instead, quality gates should be arranged so
each one does enough testing to give confidence the next
set of tests is worth doing.
For example, after some functional tests, a quick perfor-
mance test might be valuable to make sure a change hasn’t
rendered the software significantly slower. Next, a short
security check could be done to make sure some easily de-
tectable security issue hasn’t been introduced. Then a full set
of regression tests could be run. Later, you could run more
security tests along with load and performance testing. Each
quality gate has just enough testing to give us confidence the
next set of tests is worth doing.
Suggestion: Do just enough of each type of testing early
in the pipeline to determine if further testing is justified.
Negative Testing
The first tests written are almost always sunny-day scenarios:
does the software do what it was intended to do? We should
also verify what the software must not do: the rainy-day
scenarios. For example, one user
shouldn’t be able to look at another user’s private data. Bad
input data should result in an error message. A consumer
should not be able to buy an item if they do not pay. A web-
user should not be able to access protected content without
logging in. Whenever you identify sunny-day tests, you should
also identify related rainy-day tests.
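As a sketch, a sunny-day test and its related rainy-day tests might look like the following in an xUnit-style framework (here Python's unittest; the `withdraw` function and its rules are hypothetical examples):

```python
import unittest

def withdraw(balance, amount):
    """Hypothetical function under test: debit an account balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_sunny_day_withdrawal(self):
        # Does the software do what it was intended to do?
        self.assertEqual(withdraw(100, 30), 70)

    def test_rainy_day_overdraft_rejected(self):
        # The software must NOT allow a purchase without payment.
        with self.assertRaises(ValueError):
            withdraw(100, 500)

    def test_rainy_day_bad_input_rejected(self):
        # Bad input data should result in an error.
        with self.assertRaises(ValueError):
            withdraw(100, -5)
```

Each sunny-day test gains rainy-day companions that pin down what the software must refuse to do; together they become part of the regression suite.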
Identifying these conditions while features are being developed
will lead to more tests, which will help build more confidence that
new features aren’t inadvertently introducing security holes. The
tests will form a body of regression tests that document how the
software is intended to work and not to work. As the code gets
more complex, you will be able to fearlessly refactor knowing that
you are not introducing unintended side effects.
Important: Sunny-day testing is important, but rainy-
day testing can be just as important for regression
and security. You need to test both to be confident
the code is working correctly.
Automated deployment
Some types of testing aren’t valuable until the code is com-
piled, deployed, and run in a production-like environment. Secu-
rity scans might depend on the web server configuration. Load
and performance tests might need production-sized systems. If
deployment is time consuming, error prone, or even just frustrat-
ing, it won’t be done frequently. That means you won’t have as
many opportunities to test deployed code.
While an easy, quick, reliable manual install makes it easier
to deploy more often, having an automated process can make
deployments almost free, especially when deployments can be
triggered automatically by passing quality gates. That lets the
delivery pipeline progress without human interaction. When
there are fewer barriers to deploying, the team will realize there
are more chances to exercise the deployment process. When
combined with the flexibility of cloud computing resources,
deployments will become a regular course of action rather than
a step taken only late in the development cycle.
Important: Automated deployments will be used more often
than simple manual deployments. They will be tested more often
and the delivery pipeline will find more uses for them.
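The trigger-on-pass idea can be sketched as follows; `deploy` is a hypothetical stand-in for a real configuration-management run (e.g., Puppet or Ansible), not any tool's actual API.

```python
# DEPLOY_LOG and deploy() are hypothetical stand-ins for a real
# configuration-management run.
DEPLOY_LOG = []

def deploy(version, environment):
    """Automated deployment: no manual steps, so it is nearly free to run."""
    DEPLOY_LOG.append((version, environment))
    return True

def on_gate_passed(version, environment="test"):
    # The pipeline progresses without human interaction.
    return deploy(version, environment)

# Every build that passes its gates becomes a deployment rehearsal.
for version in ("1.0.1", "1.0.2", "1.0.3"):
    on_gate_passed(version)
```

Because each passing build exercises the deployment automatically, the deployment process itself gets tested many times before it matters in production.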
Configuration management tools that perform automated
deployments have garnered a lot of attention in recent years,
and many excellent tools, frameworks, and platforms are readily
available, both commercially and open source. Puppet [18], Chef [19], and Ansible
[20] lead the pack with open-source products that can be
coupled with commercial enterprise management systems.
Active ecosystems have evolved around each of them with
plenty of community support.
Using automated deployments more often gives you more
chances to validate that your deployment process works. You
can't afford to assume it worked just because the scripts ran;
you have to verify that it successfully deployed and configured your
system or systems using an automated verification process. It
has to be quick, so you can afford to run it on each deploy-
ment. It should test the deployment, not the application func-
tionality, so focus on the interfaces between systems (e.g.,
IP addresses and firewalls), configuration properties (e.g.,
database connection settings), and basic signs of life (e.g., is
the application responding). Repeatedly deploying to different
environments and then verifying the deployment works gives
you higher confidence it will work when deploying to produc-
tion, which is the deployment that really counts.
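A deployment verification suite along these lines might look like the sketch below: it checks configuration properties and a system interface (is a dependency reachable through the firewall?) rather than application features. The property names are hypothetical examples.

```python
import socket

def check_port_open(host, port, timeout=2.0):
    """Interface check: can this environment reach a dependency
    (e.g., a database port through the firewall)?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def verify_deployment(config):
    """Return a list of problems; an empty list means the deployment
    looks healthy. Checks the deployment, not application features."""
    problems = []
    # Configuration properties (key names are hypothetical examples).
    for key in ("db_host", "db_port", "app_url"):
        if not config.get(key):
            problems.append(f"missing configuration: {key}")
    # Basic sign of life: is the database reachable?
    if config.get("db_host") and config.get("db_port"):
        if not check_port_open(config["db_host"], config["db_port"]):
            problems.append("database is unreachable")
    return problems
```

Because the checks take configuration as input, the same suite can run unchanged after every deployment to every environment.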
Suggestion: Each deployment should be followed
with an automated deployment verification suite.
Make the deployment verification reusable, so the
same checks and tests can be used after each deploy-
ment, no matter which environment.
Deployment verification checks can usually be automated us-
ing the same tool you use for functional and regression testing.
If that tool is too heavyweight or can’t be easily integrated into
the pipeline, consider a lightweight functional testing framework
like Selenium [21] and/or one of the xUnit test frameworks [22],
such as JUnit [23] for Java or NUnit [24] for .NET.
Exploratory Testing
Manual exploratory testing is not made obsolete by adopt-
ing automated testing. Manual testing becomes more impor-
tant since automated tests will cover the easy things, leaving
the more obscure problems undiscovered. Testers will need
increasing amounts of creativity and insight to detect these
issues, traits almost impossible to build into automation. The
very term exploratory testing highlights the undefined nature
of the testing. Automated tests will never adapt to find issues
they aren’t testing for. This is known as the paradox of auto-
mation. “The more efficient the automated system, the more
crucial the human contribution [25].”
The delivery pipeline does not have to be a never-stopping
conveyor belt of releases. Human testers cannot cope with a
constant stream of new releases. They cannot deal with the
software changing mid-test or even mid-test cycle. Even when
they find one problem, there is value in continuing their tests to
see if the same problem exists in related functions, or look-
ing for unrelated issues in other parts of the code. There is a
balance to strike between investing time testing a non-viable
candidate for production and restarting the test cycle to fix every
little problem individually.
Waiting for human testers to be ready to start a new test
cycle slows down the rest of the pipeline. In order to incor-
porate their testing and not constantly interrupt their test
cycles as new versions of the software are made available,
consider on-demand deployments, where the pipeline does
not deploy to the exploratory testing environment until the
testers request it. Or perhaps the software is
deployed automatically to a new dynamic environment each
time it is packaged, and the testers move on to the most
recent (or most important, or most promising) environment.
In this way, there is always an environment available for the
testers to use without pulling the rug out from under them
during their test cycle, thereby buffering the bottleneck [26].
While you want to reduce the time testers spend testing a
build that is not viable, you also don’t want to start so late as
to be a constraint for other activities. Consider running the
exploratory testing in parallel with other automated and non-
automated tasks, minimizing the wait by placing it at the end of
the cycle rather than the start. Think about time boxing (defining
and enforcing a fixed duration) the test cycle.
Suggestion: Deployments for manual testing must
be coordinated so testers can have a stable environ-
ment. Consider on-demand deployments, and make
sure the pipeline is only waiting at the end of manual
testing, not the beginning.
Parallel Testing
Just as with exploratory testing, other long-running tests
should be run in parallel, so the pipeline makes progress instead
of waiting for each long test to complete in turn. Taking advantage
of automated deployments, multiple environments can be built so some tests can be
done at the same time using different resources. This can mean
doing multiple types of tests at one time, or breaking one type
of tests into smaller chunks that can be handled in parallel.
Often four one-day-long tasks are preferable to one four-day-
long task because the shorter tasks give additional opportunities
for feedback. The fourth day might not be needed if a
show-stopper is identified on day three. Those four tasks might
also be run in two parallel tracks, taking only two days in total. Or
perhaps a two-day stress test can be undertaken in parallel with a
two-to-three-day security scan, to reduce the effect of the bottleneck.
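The effect of running independent long tests side by side can be sketched with a thread pool; the test names and (scaled-down) durations are hypothetical.

```python
import concurrent.futures
import time

def stress_test():
    time.sleep(0.2)           # stand-in for a two-day stress test
    return ("stress", "pass")

def security_scan():
    time.sleep(0.3)           # stand-in for a multi-day security scan
    return ("security", "pass")

def run_in_parallel(tests):
    """Run independent test tasks concurrently: total wall time approaches
    the longest single task, not the sum of all of them."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return dict(pool.map(lambda t: t(), tests))

results = run_in_parallel([stress_test, security_scan])
```

The same pattern applies at pipeline scale: automated deployments make it cheap to stand up one environment per parallel track.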
Suggestion: Long-running testing should be done in
parallel as much as practical, so that you don’t have
to wait days or weeks for individual test phases to be
completed in sequence.
Infrastructure
Development teams need infrastructure to get their work
done. Source code repositories, CI engines, test servers,
certificate authorities, firewalls, and issue tracking systems
are all examples of tools that might be required, but they are
often not deliverables for the project.
Infrastructure doesn’t stay static. Systems need to be moved
or replicated. They get resized. Applications, tools, and operating
systems get upgraded. Hardware goes bad. And other projects
need to use the same or similar infrastructure. Setting up your
infrastructure is never a one-time occurrence. Even though this
infrastructure is internal-facing, it quickly becomes mission criti-
cal to the development team.
Treat it like you do production code. Automate the deployment
so that redeploying is as easy as pushing out a new version of the
software you are writing. Use the same automated deployment
tools since you already have experience and tools to support them.
Suggestion: Use your familiarity with the automated
deployment tools to automate your infrastructure de-
ployments as well. Treat automated deployment code
and infrastructure as mission critical.
Case Study – Forge.mil
DISA’s Forge.mil supports collaborative development for the
DoD. It is built using commercial off-the-shelf software coupled
with open-source tools and custom integration code, written in a
variety of programming languages (e.g., Java, PHP, Python, Perl,
Puppet). The team used agile techniques from the beginning
in order to maximize throughput for the small team doing the
integration and development work. The project also served as
an exemplar project to demonstrate and document how agile
techniques could be used within DoD projects.
An early focus on continuous integration led the team to
identify several bottlenecks in the delivery process. Functional
testing was manual, slow, and hard to do comprehensively. De-
velopment, test, and integration environments were all config-
ured differently from each other and different than production.
Deployments were manual, long, complicated, and unreliable.
Security patches were often applied directly into production with
limited testing, almost always in response to information assur-
ance vulnerability alerts (IAVAs). A team of about two dozen de-
velopers, testers, integrators, managers, and others were deliver-
ing software to production once every six months. A software
release was a big, scary event, carefully planned and scheduled
weeks in advance by the entire team. Problems were identified
in the days after each release (often by end users), carefully tri-
aged, with hot fixes deployed or workarounds documented.
The team focused on removing some of these bottlenecks,
concentrating on improved functional and regression testing.
After discovering the book Continuous Delivery by Jez Humble
and Dave Farley [27], they began using Puppet scripts for con-
figuration management which greatly improved the reliability of
production deployments. Consistent, production-like deployments
in other environments could be performed on-demand in minutes,
many times a week. Proactive security testing and vulnerability
patching became convenient and did not disrupt other develop-
ment and testing activities. The bottlenecks the team had identi-
fied earlier were eliminated or greatly reduced, one-by-one.
Over time, the team size decreased to fewer than a dozen
people. Software was confidently deployed to production
every two weeks with neither drama nor concern. Full regres-
sion tests, performance tests, and security tests were regular
occurrences multiple times a week. Security patches were
incorporated into the normal release cycle, often being fully
tested and deployed to production before the IAVAs were
even issued. Reports of issues after releases (aka escaped
defects) disappeared almost completely. Software releases
were driven by business needs and the project management
office, not by technical limitations and risks identified by the
developers, testers, and integrators.
More details are available in Continuous Delivery in a Legacy
Shop - One Step at a Time [28], originally presented at DevOps
Conference East 2015 in Orlando, Florida.
FURTHER READING
1. Humble, Jez, and David Farley. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Upper Saddle River, NJ: Addison-Wesley, 2011. Print.
2. Kim, Gene, Kevin Behr, and George Spafford. The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win. Portland, Oregon: IT Revolution, 2013. Print.
3. Duvall, Paul M., Steve Matyas, and Andrew Glover. Continuous Integration: Improving Software Quality and Reducing Risk. Upper Saddle River, NJ: Addison-Wesley, 2007. Print.
4. Fowler, Martin, and Kent Beck. Refactoring: Improving the Design of Existing Code. Reading, MA: Addison-Wesley, 1999. Print.
Conclusion
The journey towards a continuous delivery practice relies
heavily on quality tests to show if the software is (or is not) a
viable candidate for production. But along with the increased
reliance on testing, there are many opportunities for performing
additional tests and additional types of tests to help build con-
fidence in the software. By taking advantage of the automated
tests and automated deployments, the quality of the software
can be evaluated and verified more often and more com-
pletely. By arranging the least expensive tests (in terms of time,
resources, and/or effort) first, a rapid feedback loop creates
openings to fix issues sooner and focus more expensive testing
efforts on software that you have more confidence in. By having
a better understanding of the software quality, the business can
make more informed decisions about releasing the software,
which is ultimately one of the primary goals of DevOps.
ABOUT THE AUTHORS
Gene Gotimer is a proven senior software architect with many years of experience in web-based enterprise application design, most recently using Java. He is skilled in agile software development as well as legacy development methodologies, with extensive experience establishing and using development ecosystems including: continuous integration, continuous delivery, DevOps, secure software development, source code control, build management, release management, issue tracking, project planning & tracking, and a variety of software assurance tools and supporting processes.
REFERENCES
1. Beck, Kent, Mike Beedle, Arie Van Bennekum, Alistair Cockburn, Ward Cunningham,
Martin Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon
Kern, Brian Marick, Robert C. Martin, Steve Mellor, Ken Schwaber, Jeff Sutherland,
and Dave Thomas. “Principles behind the Agile Manifesto.” Agile Manifesto. N.p.,
2001. Web. 10 Dec. 2015. <http://agilemanifesto.org/principles.html/>.
2. Puppet Labs 2015 State of DevOps Report. Rep. PwC US, 22 July 2015. Web. 10
Dec. 2015. <https://puppetlabs.com/2015-devops-report>
3. “Welcome to Jenkins CI!” Jenkins CI. CloudBees, n.d. Web. 17 Dec. 2015.
<https://jenkins-ci.org/>.
4. “Team Foundation Server.” Team Foundation Server. Microsoft, n.d. Web. 19 Jan.
2016. <https://www.visualstudio.com/en-us/products/tfs-overview-vs.aspx>.
5. Fowler, Martin, and Kent Beck. Refactoring: Improving the Design of Existing Code.
1st ed. Reading, MA: Addison-Wesley, 1999. Print.
6. “Extreme Programming.” Wikipedia. Wikimedia Foundation, n.d. Web. 21 Jan. 2016.
<https://en.wikipedia.org/wiki/Extreme_programming>.
7. “Test-driven Development.” Wikipedia. Wikimedia Foundation, n.d. Web. 21 Jan.
2016. <https://en.wikipedia.org/wiki/Test-driven_development>.
8. “Do The Simplest Thing That Could Possibly Work.” Do The Simplest Thing That
Could Possibly Work. Cunningham & Cunningham, Inc., n.d. Web. 12 Dec. 2015.
<http://c2.com/cgi/wiki?DoTheSimplestThingThatCouldPossiblyWork>
9. “JaCoCo Java Code Coverage Library.” EclEmma. N.p., n.d. Web. 19 Jan. 2016.
<http://eclemma.org/jacoco/>.
10. “Coverage.” Python Package Index. N.p., n.d. Web. 19 Jan. 2016. <https://pypi.
python.org/pypi/coverage>.
11. “NCover | .NET Code Coverage for .NET Developers.” NCover. Gnoso, n.d. Web. 19
Jan. 2016. <http://www.ncover.com/>.
12. “Mutation testing.” Wikipedia. Wikimedia Foundation, n.d. Web. 12 Dec. 2015.
<https://en.wikipedia.org/wiki/Mutation_testing>
13. “Real World Mutation Testing.” PIT Mutation Testing. N.p., n.d. Web. 19 Jan. 2016.
<http://pitest.org/>.
14. “NinjaTurtles - Mutation Testing for .NET (C#, VB.NET).” NinjaTurtles. N.p., n.d. Web.
10 June 2015. <http://www.mutation-testing.net/>.
15. “Humbug.” GitHub. N.p., n.d. Web. 19 Jan. 2016. <https://github.com/padraic/humbug>.
16. “Index of Packages Matching ‘mutationtesting’.” Python Package Index. N.p., n.d.
Web. 19 Jan. 2016. <https://pypi.python.org/pypi?%3Aaction=search&term=mutati
on%2Btesting&submit=search>.
17. “Put Your Technical Debt under Control.” SonarQube™. SonarSource, n.d. Web. 20
Jan. 2016. <http://www.sonarqube.org/>.
18. “Open Source Puppet.” Puppet Labs. Puppet Labs, n.d. Web. 20 Jan. 2016. <https://
puppetlabs.com/puppet/puppet-open-source>.
19. “Chef.” Chef. Chef Software, Inc., n.d. Web. 20 Jan. 2016.
<https://www.chef.io/>.
20. “Ansible Is Simple IT Automation.” Ansible. Ansible, Inc., n.d. Web. 20 Jan. 2016.
<http://www.ansible.com/>.
21. “Selenium - Web Browser Automation.” SeleniumHQ. N.p., n.d. Web. 20 Jan. 2016.
<http://www.seleniumhq.org/>.
22. “XUnit.” Wikipedia. Wikimedia Foundation, n.d. Web. 20 Jan. 2016. <https://
en.wikipedia.org/wiki/XUnit>.
23. “JUnit.” JUnit. N.p., n.d. Web. 20 Jan. 2016. <http://junit.org/>.
24. “NUnit.” NUnit. N.p., n.d. Web. 20 Jan. 2016. <http://www.nunit.org/>.
25. “Automation.” Wikipedia. Wikimedia Foundation, n.d. Web. 17 Dec. 2015. <https://
en.wikipedia.org/wiki/Automation#Paradox_of_Automation>
26. Goldratt, Eliyahu M. “Theory of Constraints.” Wikipedia. Wikimedia Foundation, n.d.
Web. 17 Dec. 2015. <https://en.wikipedia.org/wiki/Theory_of_constraints>
27. Humble, Jez, and David Farley. Continuous Delivery: Reliable Software
Releases through Build, Test, and Deployment Automation. Upper Saddle River,
NJ: Addison-Wesley, 2011. Print.
28. Gotimer, Gene. “Continuous Delivery in a Legacy Shop - One Step at a Time.” Slide-
Share. Coveros, Inc., 12 Nov. 2015. Web. 20 Jan. 2016. <http://www.slideshare.net/
ggotimer/continuous-delivery-in-a-legacy-shop-one-step-at-a-time>.
Tom Stiehm has been developing applications and managing software development teams for twenty
years. As CTO of Coveros, he is responsible for the oversight of all technical projects and integrating new
technologies and application security practices into software development projects. Most recently, Thomas
has been focusing on how to incorporate DevOps best practices into distributed agile development proj-
ects using cloud-based solutions and how to achieve a balance between team productivity and cost while
mitigating project risks. Previously, as a managing architect at Digital Focus, Thomas was involved in agile
development and found that agile is the only development methodology that makes the business reality of
constant change central to the development process.