Programmers aren’t perfect. Testing and manual code reviews can’t find every problem in code. So, bugs persist. And it’s only going to get worse as your systems grow larger and more complex.
How can you find critical problems in your code? And still release a quality product on time?
Static code analysis might be the answer you’re looking for.
Find out why:
- Bug-free software is hard to achieve.
- Automated tools are the way to go.
- Safe, secure, and reliable software can be achieved at lower costs.
Plus, you’ll see examples of bugs easily missed by manual code reviews. And you’ll learn how static code analysis and manual code reviews work together.
Follow us for news and insights!
Visit www.perforce.com
Editor's Notes
Hello and thank you for joining our webinar. Today we’re going to talk about why bug-free software is so hard to achieve, how automated tools are the way to go for helping improve software quality, and we’re going to show how safe, secure, and reliable software can be achieved with lower development costs.
We recently asked software developers who attended our webinars which coding standards they used. Over a quarter (27%) of them told us that they did not use any coding standard. Fortunately, this means that around three-quarters of those engineers do code according to a coding standard. A coding standard is useful for any organization to ensure consistent code style, which makes it easier for teams to understand and maintain code. For programming languages such as C and C++, the purpose of a coding standard goes well beyond just improving the maintainability of code. It serves to prevent dangerous use of language features that can result in unintended, undefined, or unspecified behavior, which in turn can lead to serious safety flaws and security vulnerabilities in the end product. C and C++ afford great flexibility to programmers – and this flexibility is needed in the design of embedded systems, or systems where performance, low runtime overhead, and real-time operation are critical. The flipside is that it is very easy for even the most experienced developer to introduce errors.
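To make that concrete, here is a small illustrative sketch (ours, not from the survey) of the defensive style such standards encourage: checking a signed addition before performing it, because signed overflow itself is undefined behavior in C and C++.

```cpp
#include <climits>

// Sketch (ours): guarding a signed addition *before* it happens,
// because signed overflow is undefined behavior in C and C++; this is
// exactly the class of latent bug coding standards aim to keep out.
int safe_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b)) {
        return 0;        // would overflow: refuse to compute
    }
    *result = a + b;     // now provably in range
    return 1;
}
```

An unguarded `a + b` may appear to work for years and then misbehave after a compiler or optimization-flag change, which is why standards treat it as a defect even when testing never trips over it.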
Programmers aren’t perfect. Manual code reviews and testing will never find every problem in code. This means that bugs persist. And it’s only going to get worse as your systems grow larger and more complex.
How can you find critical problems in your code? And still release a quality product on time?
Static code analysis might be the answer you’re looking for. In this webinar I will give you a brief introduction to static code analysis. I will talk about why bug-free software is hard to achieve and why automated tools are the way to go, and show you that safe, secure, and reliable software can be achieved at lower costs.
As it’s intended to be an introduction to the topic we won’t be going into deep technical territory, but you will see a couple of examples of the types of bugs that can be easily missed by manual code review, but are easily caught by a static analyzer.
Software is everywhere
Our world is increasingly driven by software. Many of the products we use every day behave according to rules defined by a software designer and implemented as program code. Defects introduced during coding may go undetected by testing and surface later on with catastrophic, even fatal consequences.
There have been a number of well documented examples – there are probably many hundreds of similar events that go unreported.
Toyota unintended acceleration
There were probably a number of causes of Toyota’s famous unintended acceleration cases such as stuck gas pedals and badly designed floor mats, but an extensive 20 month long analysis of Toyota’s source code by Michael Barr, the well respected embedded software specialist, found:
There are a large number of functions that are overly complex. By the standard industry metrics some of them are untestable, meaning that it is so complicated a recipe that there is no way to develop a reliable test suite or test methodology to test all the possible things that can happen in it. Some of them are even so complex that they are what is called unmaintainable, which means that if you go in to fix a bug or to make a change, you're likely to create a new bug in the process. And the conclusion is that the failsafes are inadequate. The failsafes that they have contain defects or gaps. But on the whole, the safety architecture is a house of cards. It is possible for a large percentage of the failsafes to be disabled at the same time that the throttle control is lost.
Image: https://www.manufacturing.net/blog/2016/08/2009-toyota-accelerator-scandal-wasnt-what-it-seemed
In March 2017 a software glitch caused a Canadian Cyclone helicopter to experience a sudden loss of altitude
Fortunately, no one died in this incident. The problem corrected itself and the pilot safely landed the helicopter, but the problem grounded the aircraft for nine weeks and created delays in training air crew.
In July 2015 Fiat Chrysler recalled 1.4 million vehicles at risk of wireless hack
Cars, SUVs and trucks are increasingly connected to the Internet and vulnerable to hacker attacks
In April 2017, Newport Medical Instruments Inc. recalled its HT70 ventilator due to unexpected shutdowns
Reason for Recall
Newport Medical Instruments Inc., now a part of Medtronic, is recalling the Newport™ HT70 and Newport™ HT70 Plus ventilators because a software problem may cause the ventilator to shut down unexpectedly without sounding an alarm. If the ventilator shuts down, the patient may not receive enough oxygen and could suffer serious adverse health consequences such as brain damage, or even death.
Image: https://www.medscape.com/viewarticle/878248
Bug-free software is hard to achieve
Programmers are not perfect
Capers Jones is a very well known expert on software quality. He has collected data on thousands of real software projects over many years.
In his excellent paper entitled “SOFTWARE DEFECT ORIGINS AND REMOVAL METHODS”, he explains that the software industry spends about $0.50 out of every $1.00 expended for development and maintenance on finding and fixing bugs.
In fact, the cost of finding and fixing bugs or defects is the largest single expense element in the history of software.
Jones’ quality measurements have shown that individual programmers are less than 50% efficient in finding bugs in their own software.
Static analysis, by contrast, is often more than 65% efficient and has topped 95%.
(http://sqgne.org/presentations/2012-13/Jones-Sep-2012.pdf)
Testing has been the primary software defect removal method for more than 50 years.
The problem is that most forms of testing are only about 35% efficient or find only one bug out of three. Defects in test cases themselves and duplicate test cases lower test defect removal efficiency. About 6% of test cases have bugs in the test cases themselves. Pre-test code inspections and static analysis can help to raise testing efficiency.
Testing by itself without any pre-test inspections or static analysis is not sufficient to achieve high quality levels.
Using static analysis can help to identify the areas of the code that may need more testing (e.g. by measuring complexity), and therefore it can help to improve testing efficiency.
Pre-test defect removal is not just about code! The major forms of pre-test defect removal include:
Desk checking by developers
Debugging tools (automated)
Pair programming (with caution)
Quality Assurance (QA) reviews of major documents and plans
Formal inspections of requirements, design, code, UML, and other deliverables
Formal inspections of requirements changes
Informal peer reviews of requirements, design, code
Editing and proof reading critical requirements and documents
Text static analysis of requirements, design
Code static analysis of new, reused, and repaired code
Running FOG and FLESCH readability tools on text documents
Requirements modeling (automated)
Automated correctness proofs
Refactoring
Independent verification and validation (IV&V)
Pre-test inspections have more than 40 years of empirical data available and rank as the top method of removing software defects, consistently topping 85% in defect removal efficiency (DRE). Static analysis is a newer method that is also high in DRE, frequently topping 65%. Requirements modeling is another new and effective method that has proved itself on complex software such as that operating the Mars Rover. Requirements modeling and inspections can both top 85% DRE.
Large, complex systems
https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/
It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code.
Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code. When you press your foot down on your car’s accelerator, for instance, you’re no longer controlling anything directly; there’s no mechanical link from the pedal to the throttle. Instead, you’re issuing a command to a piece of software that decides how much air to give the engine. The car is a computer you can sit inside of. The steering wheel and pedals might as well be keyboard keys.
Code bases are becoming larger and more complex – for example, it is often quoted that:
“There’s 100 million lines of code in cars now”.
This will only grow as more and more driver assistance features and self-driving capabilities are added.
As code bases become larger and more complex it becomes even more difficult to find and fix bugs.
Image: http://www.todayifoundout.com/wp-content/uploads/2015/09/pole-vaulting.png
Few software projects are developed from scratch by a team within a single organization. Most use existing/legacy code plus externally sourced components, some of which may be open source.
Reused code from legacy applications and external sources can be a major source of defects.
Open source components are the building blocks of software. Their widespread reuse among developers makes them prime targets for cybercriminals. Since a reported vulnerable open source component could be used in thousands of products, they represent a gift for attackers.
A security weakness that came to be known as “Devil’s Ivy” gave hackers access to millions of connected devices
A stack buffer overflow vulnerability was found in a security camera made by Axis Communications. The vulnerability exists in open source gSOAP software that is used in millions of connected devices.
It is likely that tens of millions of products – software products and connected devices – are affected by Devil’s Ivy to some degree.
Image: https://commons.wikimedia.org/wiki/File:Concrete_Reinforcement_of_building_structure.JPG
For any real-world application it is simply not possible to test every possible execution path. You cannot test every valid input, and conversely you cannot test every invalid input.
So, the testing effort is constrained by time and budget which means there are always compromises. It is impossible to know whether testing has found every bug – even if you do find the last bug you will never know it!
Roger Pressman, an internationally recognized consultant and author in software engineering notes in his book, Software Engineering: A Practitioner’s Approach: “exhaustive testing presents certain logical problems… Even a small 100-line program with some nested paths and a single loop executing less than twenty times may require 10 to the power of 14 possible paths to be executed… To test all of these 100 trillion paths assuming each could be evaluated in a millisecond, would take 3170 years”.
Because of all these challenges, more and more development teams are adopting automated tools such as static code analyzers for pre-test defect removal. Let’s take a look at what static code analyzers can do, how static analysis compares with manual code reviews or code inspections, and how it stacks up against dynamic testing and analysis.
What is static code analysis?
Development teams are under pressure. Quality releases need to be delivered on time. Coding and compliance standards need to be met. And mistakes are not an option.
That’s why development teams are using static analysis.
Static code analysis is a method of debugging by examining code before a program is run. It’s done by analyzing a set of code against a set (or multiple sets) of coding rules. This is usually done in early stages of development.
For organizations practicing DevOps, static code analysis takes place during the “Create” phase. Static code analysis also supports DevOps by creating an automated feedback loop. Developers will know early on if there are any problems in their code. And it will be easier to fix those problems.
This type of analysis addresses weaknesses in code that might lead to vulnerabilities. Of course, this may also be achieved through manual code reviews. But using automated tools is much more effective.
Static analysis is commonly used to comply with coding guidelines — such as MISRA. And it’s often used for complying with industry standards — such as ISO 26262.
The sophistication of the analysis varies greatly depending on the tool employed. The simplest tools often only search source code for text pattern matches or calculate basic program metrics (such as complexity measures) to determine the likelihood of problems arising from a given code segment.
More advanced static-analysis tools act as an advanced compiler for the source code, deeply analyzing both execution and data flow for faults that may lead to a field failure. The most advanced tools also include link information across multiple translation units (cross-module analysis) to detect higher-level problems.
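As a toy illustration of the simplest class of tool mentioned above (ours, not a real product), a pattern-based checker might do no more than search source text for known-dangerous calls:

```cpp
#include <string>
#include <vector>

// Toy sketch of the simplest kind of "static analysis": plain text
// pattern matching for known-dangerous library calls. Real tools
// parse and analyze the code; this only searches for substrings.
std::vector<std::string> find_banned(const std::string &source)
{
    static const char *banned[] = {"gets(", "strcpy(", "sprintf("};
    std::vector<std::string> hits;
    for (const char *pattern : banned) {
        if (source.find(pattern) != std::string::npos) {
            hits.push_back(pattern);
        }
    }
    return hits;
}
```

A checker like this is fast but shallow: it cannot tell a real call from a comment, and it knows nothing about data flow, which is exactly the gap the more advanced tools close.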
Static analysis of source code doesn't represent new technology. It’s commonly used during implementation and review to detect software implementation errors.
Many studies have shown the effectiveness of static analysis.
For example, one study showed that static analysis reduced software defects by a factor of six [1]. Another study, looking at the quality of a Java project, showed that it detected 60% of post-release failures [2] – defects presumably missed by manual code reviews and testing.
1. Xiao, S. and C. H. Pham, "Performing high efficiency source code static analysis with intelligent extensions," Proceedings of the Asia Pacific Software Engineering Conference (APSEC), 2004, pp. 346-355.
2. QA Systems, "Overview large Java project code quality analysis," Tech. Rep., 2002.
Safety-critical software developers have long been proponents of using static-analysis tools. However, static-analysis tools also offer many advantages to those working in less critical areas.
Static-analysis techniques can detect buffer overflows, security vulnerabilities, memory leaks, timing anomalies (such as race conditions and deadlocks), unused source code segments, and other common programming mistakes.
In fact static analysis can and does frequently find coding errors that are missed in manual code reviews and by testing.
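To illustrate one of these defect classes, here is a small sketch (our example, not one from the webinar) of the kind of off-by-one bounds error that flow-sensitive analyzers routinely flag and reviewers routinely miss:

```cpp
#include <cstring>
#include <cstddef>

// Illustrative sketch (ours): a bounds check an analyzer can verify.
// The commented-out '<=' variant is the classic off-by-one: an
// 8-character src needs 9 bytes (including the terminator) and would
// overflow an 8-byte buffer.
int copy_id(char *dst, std::size_t dst_size, const char *src)
{
    // if (std::strlen(src) <= dst_size)  // off-by-one: overflow on equality
    if (std::strlen(src) < dst_size) {    // leaves room for the '\0'
        std::strcpy(dst, src);
        return 1;
    }
    return 0;  // too long: reject rather than silently truncate
}
```

With the `<=` variant, every test using short inputs passes; only an input of exactly `dst_size` characters triggers the overflow, which is why value-tracking analysis catches what typical test suites do not.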
Manual code review vs. static analysis
The traditional approach to avoiding these issues is to conduct a manual code review. This involves at least one other developer inspecting the source code to check:
Functionality – the program is expected to execute functions according to the design
Integrity – the program is not expected to behave in any undefined or unspecified manner
Style – the source code is written in accordance with the required coding style to aid maintainability
Manual code reviews are time consuming, labor intensive, and prone to errors. It is not practical for a manual code reviewer to follow every possible execution path. It is not easy to determine the effects of functions and variables external to the file being viewed. The results of a manual code review will be heavily influenced by the expertise of the reviewers, and the personal relationships they have with the other team members.
Some Benefits of Automated static analysis:
Full code coverage
Static analyzers check even those code fragments that rarely gain control. Such fragments usually cannot be exercised by other methods. This allows you to find defects in exception handlers, or in the logging system.
Inexpensive
Much faster than a manual code review and doesn’t tie up the time of the developers – they can focus more on developing!
Supports Continuous Integration
In addition to developers performing local scans before they check-in code, full project scans can be scheduled on a centralized build server.
Education
Developers will pick up best practice coding hints and automatically consider them when coding – improving their efficiency over time.
Some Benefits of Manual Code Review
Manual code reviews still have a place – static analysis should always be used in conjunction with manual code reviews.
Find Design and Logical Flaws
An automated tool cannot know the actual intent of the code (although, as we shall see in a minute, the best static analyzers – those that perform program flow analysis – can point out areas that may need attention).
Education
Reviewing other people’s code can be a great way to share safe, secure coding knowledge.
It is widely acknowledged that a combination of manual reviews and automated static code analysis is the best way…
Here is an example of a problem that could easily be missed by a manual reviewer:
StringListConfigControl is a class derived from ConfigControl. StringListConfigControl releases memory in its destructor.
An instance of StringListConfigControl is dynamically instantiated as a pointer to a ConfigControl. When delete(p_control) is called, the destructor of the implementing class is not called and the memory is not released.
This code is probably located in a .cpp implementation file, while the fix belongs in a header file, so it is easy for a manual reviewer to miss this type of bug.
Of course, the fix is very easy – declare the base class's destructor as virtual to ensure that the destructor of the implementing class gets called.
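A minimal sketch of the pattern, using the class names from the slide (the bodies are ours, for illustration), with the fix applied:

```cpp
// Sketch of the pattern described above; class names follow the slide,
// bodies are illustrative. With the destructor declared virtual (the
// fix), deleting through a base pointer runs the derived destructor;
// without 'virtual', it would not, and the behavior is undefined.
static int derived_dtor_calls = 0;

class ConfigControl {
public:
    virtual ~ConfigControl() {}  // the fix: a virtual base destructor
};

class StringListConfigControl : public ConfigControl {
public:
    // In the real code this destructor releases memory.
    ~StringListConfigControl() override { ++derived_dtor_calls; }
};
```

Removing `virtual` from `~ConfigControl()` reproduces the bug: `delete p_control` would then destroy only the base part, and the derived destructor (with its memory release) would never run.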
Here’s another example of something that could easily be missed during a manual review.
If this code is scanned by Perforce’s QAC analyzer, it very quickly reports a line of redundant code – i.e.
2985 DF_Redundancy This operation is redundant. The value of the result is always that of the left-hand operand.
The logic is as follows:
For the second if statement to be true, min needs to be zero. For min to be zero, it must have been initialized via the ‘else’ statement. This means that either interval->min is zero, or interval->min is greater than offset. If interval->min is greater than offset, then min will not be zero, in which case the highlighted line will not be executed. If interval->min is zero, then for min to be zero, offset must also be zero, in which case the highlighted operation is redundant (subtracting zero from interval->max).
Though not a bug in itself, the fact that there is redundant code probably indicates a logical flaw where the code is not doing what the developer intended.
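The scanned source itself is not reproduced in these notes, but the logic described above can be sketched as a hypothetical reconstruction (the names interval, min, and offset follow the narrative; the surrounding structure is ours):

```cpp
// Hypothetical reconstruction of the logic described above; the exact
// original code was not shown in the webinar.
struct Interval {
    unsigned int min;
    unsigned int max;
};

unsigned int adjust(const Interval *interval, unsigned int offset)
{
    unsigned int min;
    unsigned int max = interval->max;

    if ((interval->min != 0U) && (interval->min <= offset)) {
        min = offset;                  // offset >= interval->min >= 1, so min != 0
    } else {
        min = interval->min - offset;  // interval->min is 0 or > offset
    }

    if (min == 0U) {
        // min == 0 is only reachable via the else branch with
        // interval->min == 0; with unsigned arithmetic that forces
        // offset == 0 too, so the line below always subtracts zero:
        // the DF_Redundancy finding.
        max = interval->max - offset;
    }
    return max + min;
}
```

Proving that the subtraction is redundant requires tracking values across both branches and both if statements, which is exactly the kind of whole-function data-flow reasoning a manual reviewer rarely performs exhaustively.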
Static Analysis vs Dynamic Testing
Whereas static analysis looks at code before it is executed – in fact, even before it is compiled - dynamic code analysis is used during testing to monitor code execution. Unit tests may be run for individual functions, typically with a testing framework which measures code coverage and checks for problems such as memory access violations. Some dynamic analysis tools require extra instrumentation code to be inserted and this can affect the performance of the software.
https://www.testingexcellence.com/static-analysis-vs-dynamic-analysis-software-testing/
Dynamic code analysis advantages:
It can identify runtime performance issues.
It allows for analysis of applications in which you do not have access to the actual code.
It identifies defects that may have been missed by static code analysis.
Dynamic code analysis limitations:
Cannot guarantee the full test coverage of the source code
Needs a fully working executable
Can only be as good as the test design – indeed if the dynamic tests are driven by some kind of script there may be bugs in the script!
It is more difficult to trace the defect back to the exact location in the code, taking longer to fix the problem.
So, static analysis doesn't depend on the compiler you are using or the environment where the compiled program will be executed. It allows you to find hidden errors that may reveal themselves only years after they were introduced – for instance, undefined behavior errors, which can surface when switching to another compiler version or when using different code optimization switches.
But, the main advantage of static analysis is that it enables you to greatly reduce the cost of eliminating defects in software. The earlier an error is detected, the lower the cost of fixing it. Thus, according to the data given in the book "Code Complete" by McConnell, fixing an error at the stage of system testing costs ten times more than at the code writing stage:
This table, taken from the book, indicates the relative average cost of fixing defects depending on when they were introduced and when they were detected.
Static analysis tools allow you to quickly detect a lot of errors at the coding stage, which significantly reduces the cost of development for the whole project.
Detecting common errors through other methods is usually extremely inefficient, and a waste of time and effort.
Integrating into the SDLC
Introducing static analysis into an existing process, especially one that is operating on a large legacy code base, can be daunting, but hopefully I’ve convinced you that it will yield significant reductions in future development effort, testing effort, and field failures. Running a static analyzer against a legacy code base is likely to yield what is sometimes termed the notorious “wall of bugs”. A lot of people will give up at this stage – the software is working, so why bother to fix what isn’t broken? The problem is that safety issues can arise and security vulnerabilities can be identified by criminals at any time. To alleviate the wall-of-bugs challenge, fully featured static code analysis solutions provide bug prioritization, baselining, and diagnostic suppression features. These features allow you to devise a strategy to actively manage your technical debt.
Tool integration is another aspect to consider: An unintegrated software development and delivery toolchain creates bottlenecks, drains productivity, impedes collaboration and inhibits project visibility.
So, in order to realize the full benefits of static analysis it is critically important to ensure that your tool integrates with the other tools that your developers are using in their day to day work. Developers spend most of their time writing and debugging within an integrated development environment, or IDE, for example Eclipse or Microsoft Visual Studio, and so they need to be able to trigger static analysis and view results as they edit code within this environment.
Perforce’s QAC and QAC++ static code analyzers are known as best-in-class tools and are considered the gold standard in safety and mission-critical industries such as Automotive, but as we’ve seen can be applied to any industry….
Unlike some free, open source, and less expensive tools, our analyzers combine different methods to find a higher proportion of bugs (recall), while at the same time ensuring that reported diagnostics represent real issues that need investigation (precision).
Our tools are used by some of the largest safety critical engineering teams working on huge, complex code bases.
Unlike some less expensive tools, our software is independently certified for use in safety critical environments
QAC and QAC++ easily integrate with existing tools. They come ready-supplied with IDE and build server integrations, plus there is a command line interface, so they can be driven from your own custom scripts.
These tools are developed by PRQA (formerly known as Programming Research Ltd), which was acquired by Perforce in early May.
We are very excited to add QAC and QAC++ to the Perforce product portfolio — and to help you develop safe, secure, and reliable software faster (and at lower costs).
We’ve come to the end of the webinar, just to recap….
To find out more about Perforce’s static code analysis solutions, please visit perforce.com, or email us at info@perforce.com with any questions, or to organize a free demo or evaluation of any of our software products.
Facebook: https://www.facebook.com/Perforce/
LinkedIn: https://www.linkedin.com/company/perforce-software?trk=top_nav_home
Twitter: https://twitter.com/perforce
Blog: https://www.perforce.com/blog
So, I hope you’ve enjoyed this webinar. All that remains is for me to say thank you very much for attending