Static analysis is most efficient when being used regularly. We'll tell you why...
Author: Evgeniy Ryzhkov
Date: 22.04.2013
Some of our users run static analysis only occasionally. They find new errors in their code, are pleased with the result, and willingly renew their PVS-Studio licenses. I should be glad too, shouldn't I? But I feel sad, because used this way the tool delivers only 10-20% of its potential, while a different workflow would give you at least 80-90%. In this post I will describe the most common mistake made by users of static code analysis tools.
Discovering static analysis
Let's start with the simplest and most common scenario: someone tries static analysis for the first time. A member of a development team comes across an article, a conference talk, or even an advertisement for some static analyzer and decides to try it on their project. I'm not talking about PVS-Studio specifically; it could be any static analysis tool. The programmer deploys the analyzer, more or less easily, and runs a check of the code. Three outcomes are possible:
• The tool fails to work. It doesn't pick up the project settings, or gathers the environment
parameters incorrectly, or fails in any other way. The user naturally grows less inclined to trust
it.
• The tool performs the check successfully and generates some diagnostic messages. The user studies them and finds them irrelevant. This doesn't mean the tool is useless; it has simply failed to show its strengths on this particular project. Perhaps it deserves a chance on another one.
• The tool generates a few relevant messages (among others) which obviously indicate that
genuine bugs are present in the code.
Strictly speaking, it is only in the third case, when something real is found, that the team starts using the tool in practice.
The biggest mistake you can make when adopting static analysis
But it is exactly at this point, when integrating static analysis into the development process, that a big mistake is often made: the team decides to run the analyzer, say, only before every release, or once a month. Let's see why such an approach is bad.
First, if you don't use the false positive suppression mechanism, you will see the same old false positives again and again and waste time re-examining them. The more messages the analyzer produces, the less attention the programmer pays to each of them.
Second, diagnostic messages are generated even for code you didn't touch between the checks, which means even more messages to examine.
Third, and most important: between two checks you will keep catching, slowly and painfully, through other methods, the very errors the static analyzer could have found for you immediately. This point is crucial, and it is exactly what people forget when estimating the usefulness of static analysis, so let's discuss it in detail in the next section.
A fallacy: "Efficiency of static analysis can be estimated by comparing
analysis results for the last year's code base release and the current one"
Some programmers suggest the following method of estimating the efficiency of static code analysis. Imagine a team that has been working on a project for several years and keeps all the release versions (1.0, 1.1, 1.2, etc.). The idea is to take the latest version of some code analyzer and run it on last year's source code - say, version 1.3 - and then run the same analyzer on the latest release, say version 1.7. This yields two reports. Studying the first report, we find that the older release contains 50 genuine errors. Studying the second, we see that 20 of those 50 bugs are still present in the latest release (plus some new ones, of course). It follows that 50 - 20 = 30 bugs were fixed by other means, without the analyzer: through manual testing, by users running the release build, and so on. We conclude that the static analyzer could have helped detect and fix those 30 errors much earlier. If this number is large for the project and developers' time is expensive, we can estimate the economic benefit of purchasing and using the static code analyzer.
This approach to estimating economic efficiency is simply wrong! You cannot evaluate a static analyzer this way, because the method contains several errors at once.
First, it ignores the bugs that were introduced in version 1.4 and eliminated by version 1.6. You may argue: "Then let's compare two consecutive releases, say 1.4 and 1.5!" But that is wrong too, because it still ignores the errors that appeared after release 1.4 and were fixed before release 1.5.
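To make this concrete, here is a small self-contained C++ sketch. The bug histories and version numbers below are invented purely for illustration (this is not data from any real project): each bug is modelled by the release in which it appeared and the release in which it was fixed, and the program compares what a two-snapshot comparison "sees" with what was actually fixed in between.

#include <iostream>
#include <string>
#include <vector>

struct Bug {
    std::string id;
    double introduced; // release in which the bug first appears (modelled as a plain number)
    double fixed;      // release in which it no longer appears
};

// A bug is visible in a snapshot if it was introduced at or before that
// release and fixed only after it.
static bool VisibleIn(const Bug &b, double release) {
    return b.introduced <= release && release < b.fixed;
}

int main() {
    // Hypothetical bug history between releases 1.3 and 1.7.
    std::vector<Bug> history = {
        {"A", 1.2, 1.5},  // visible in 1.3, gone by 1.7 -> counted by the comparison
        {"B", 1.3, 1.6},  // visible in 1.3, gone by 1.7 -> counted by the comparison
        {"C", 1.4, 1.6},  // introduced after 1.3, fixed before 1.7 -> invisible to it
        {"D", 1.4, 1.5},  // lived only between the two snapshots -> invisible to it
        {"E", 1.5, 1.9},  // still present in 1.7 -> a "new" bug, not counted as fixed
    };

    int fixedPerSnapshotDiff = 0; // what the snapshot comparison credits
    int fixedInBetween = 0;       // what actually got fixed between 1.3 and 1.7

    for (const Bug &b : history) {
        if (VisibleIn(b, 1.3) && !VisibleIn(b, 1.7))
            ++fixedPerSnapshotDiff;
        if (b.fixed > 1.3 && b.fixed <= 1.7)
            ++fixedInBetween;
    }

    std::cout << "Snapshot comparison credits " << fixedPerSnapshotDiff
              << " fixed bugs\n";
    std::cout << "Bugs actually found and fixed between the snapshots: "
              << fixedInBetween << "\n";
    // Bugs C and D never show up in either report, yet the team still paid
    // to find and fix them through testing, user reports, and so on.
}

The comparison reports 2 fixed bugs, while 4 were actually found and fixed the expensive way between the two snapshots; those are exactly the bugs regular static analysis would have caught early.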
Second, release versions are already debugged and contain relatively few bugs, unlike the work-in-progress code developers deal with every day, which is far buggier and more crash-prone. You certainly fix plenty of bugs between releases, but you find them through other methods that are naturally more expensive.
Here we should recall the well-known table showing how the cost of fixing a bug depends on when it was introduced and when it was detected: the later a bug is found, the more expensive it is to fix. Running static analysis only "before a release" therefore automatically increases the cost of your bug fixes.
Thus, you cannot fairly evaluate the efficiency of static analysis by simply running it on last year's release and the current one and comparing the results.
Tips on how to obtain the maximum benefit when adopting static
analysis
Over the years we have worked in the field of static code analysis, we have identified several practices that maximize the benefit of static analysis tools. Although I will point out how each mechanism is supported in PVS-Studio, the tips apply to any static analysis tool.
Mark false positives to get fewer messages to study the next time
Any static code analyzer produces false positives; that is in the nature of the tool and cannot be helped. Everybody tries to reduce their number, of course, but it will never reach zero. So that you don't see the same false positives again and again, a good tool provides a mechanism to suppress them: you mark a message as a false positive, and the analyzer stops generating it on subsequent checks. In PVS-Studio this is done with the "Mark As False Alarm" command; see the documentation section "Suppression of false alarms" for details.
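For illustration, here is roughly what a marked line looks like in the analyzed C++ code afterwards. The function and the particular diagnostic (V550, which warns about exact floating-point comparisons) are chosen only as a plausible example, not taken from any real project; the suppression mark takes the form of a short comment appended to the reported line, so it travels with the code in version control.

#include <cmath>

// Sketch only: the function below is invented for the example.
// Suppose the analyzer warned about the exact floating-point comparison
// (diagnostic V550), the team reviewed it and decided the fast-path
// "a == b" check is intentional, so the message was marked as a false
// alarm. The appended "//-V550" comment tells the analyzer not to report
// this line on subsequent checks.
bool NearlyEqual(double a, double b)
{
    return a == b || std::fabs(a - b) < 1e-9; //-V550
}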
Simple as it is, this recommendation can save you a lot of time. Moreover, you will stay more focused when there are fewer messages to examine.
Use incremental analysis (automated check of freshly recompiled files)
The effective way to use incremental analysis is to integrate it into the IDE, so that the tool runs automatically whenever freshly modified files are compiled. Ideally, incremental analysis should run on the computer of every developer working on the code base. That way many bugs are detected and fixed before the code even reaches the version control system, which greatly reduces the "cost of bug fixes". PVS-Studio supports this incremental analysis mode.
Check files modified in the last several days
If for some reason you cannot install the analyzer on every developer's computer, you can check the code once every few days. To avoid a pile of messages about old files, static analysis tools provide a "check files modified in the last N days" option; you can set it to 3 days, for example. Technically nothing prevents you from setting this parameter to any value (say, 7 or 10 days), but we don't recommend it. If you check the code only once in 10 days, you repeat the "occasional use" mistake: when a bug is introduced today, found by testers tomorrow, filed in the bug tracker the day after, and fixed within 5 days, running the analysis once every 10 days does not help at all.
Checking the code once every two or three days, however, can be very useful. PVS-Studio supports this option - see the "Check only Files Modified In" setting.
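The option itself lives in the analyzer's settings, but the idea behind it is simple. Here is a rough, hypothetical C++17 sketch of the kind of filtering such an option performs; it is not PVS-Studio's implementation, just an illustration of selecting the source files whose last-write time falls within the last N days.

#include <chrono>
#include <filesystem>
#include <iostream>
#include <vector>

namespace fs = std::filesystem;

// Collect C/C++ source files under 'root' that were modified in the
// last 'days' days; in a real tool these paths would be fed to the analyzer.
std::vector<fs::path> FilesModifiedWithin(const fs::path &root, int days)
{
    std::vector<fs::path> result;
    const auto threshold =
        fs::file_time_type::clock::now() - std::chrono::hours(24 * days);

    for (const auto &entry : fs::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file())
            continue;
        const auto ext = entry.path().extension();
        if (ext != ".c" && ext != ".cpp" && ext != ".h" && ext != ".hpp")
            continue;
        if (entry.last_write_time() >= threshold)
            result.push_back(entry.path());
    }
    return result;
}

int main()
{
    // Analyze only what changed in roughly the last 3 days.
    for (const auto &file : FilesModifiedWithin(".", 3))
        std::cout << file << '\n';
}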
Set the static analyzer to run every night on the build server
Regardless of whether you use incremental analysis on every developer's computer, it is very useful to run the analyzer over the whole code base every night. A command-line launch mode is therefore an important capability of the tool, and PVS-Studio of course supports it.
The more practices you follow, the greater effect you get
Let's list once again the tips that enhance the efficiency of static code analysis:
1. Mark false positives to get fewer messages to study the next time.
2. Use incremental analysis (automated check of freshly recompiled files).
3. Check files modified in the last several days.
4. Set the static analyzer to run every night on the build server.
If you follow all four recommendations, you will get the highest payback on your investment in static analysis tools. Of course, this is not always achievable for various reasons, but it is certainly worth striving for.
