Users' desire to compare different code analyzers is natural and understandable. However, fulfilling that desire is not as easy as it may seem at first sight: it is far from obvious which particular factors should be compared.
PVS-Studio's New Message Suppression Mechanism (Andrey Karpov)
The PVS-Studio analyzer already has a false positive suppression mechanism, and it fully satisfies us as far as its functionality is concerned, i.e. we have no complaints about its reliability. However, some of our customers would like to work only with the messages the analyzer generates for new, i.e. freshly written, code. We can understand why: on a large-scale project the analyzer generates thousands or even tens of thousands of messages for the existing source code, and surely no one would feel like fixing all of them.
We continue checking Microsoft projects: analysis of PowerShell (PVS-Studio)
It has become a "good tradition" for Microsoft to make its products open-source: CoreFX, .NET Compiler Platform (Roslyn), Code Contracts, MSBuild, and other projects. For us, the developers of the PVS-Studio analyzer, it's an opportunity to check well-known projects, tell people (including the project authors themselves) about the bugs we find, and additionally test our analyzer. Today we are going to talk about the errors found in another Microsoft project, PowerShell.
Espressif IoT Development Framework: 71 Shots in the Foot (Andrey Karpov)
The article summarizes the author's analysis of errors found in the Espressif IoT Development Framework using the PVS-Studio static analyzer. The analyzer found 71 errors in the framework code, including security-relevant defects such as incorrect argument order, loss of significant bits, and failure to clear private data from memory. The author notes that additional errors may be found with a more complete analysis. Conditional compilation directives and macros used in the framework code generated many false positives from the analyzer.
How the PVS-Studio analyzer began to find even more errors in Unity projects (Andrey Karpov)
While developing the PVS-Studio static analyzer, we try to advance it in various directions: our team is working on plugins for IDEs (Visual Studio, Rider), improving integration with CI, and so on. Increasing the efficiency of analyzing Unity projects is also one of our priority goals. We believe that static analysis will allow programmers using this game engine to improve the quality of their source code and simplify work on any project. Therefore, we would like to increase the popularity of PVS-Studio among companies that develop for Unity. One of the first steps in implementing this idea was to write annotations for the methods defined in the engine. This allows the analyzer to check the correctness of code that calls the annotated methods.
This document provides an overview of using the PVS-Studio static code analysis tool for Visual C++ projects in Visual Studio. It describes how to install and configure PVS-Studio, analyze a project, work with diagnostic messages, use the incremental analysis feature to check for errors as code is written, and suppress false positives. The tool integrates directly into Visual Studio and can detect many types of errors like typos, logic errors, and security issues.
An Ideal Way to Integrate a Static Code Analyzer into a Project (PVS-Studio)
One of the most difficult things about using static analysis tools is managing false positives. There are a number of ways to eliminate them, either through the analyzer's settings or by changing the code itself. Taking a small project, an Apple II emulator for Windows, as an example, I'll show you how to handle PVS-Studio's analysis report and demonstrate with a number of examples how to fix errors and suppress false positives.
Static analysis is most efficient when being used regularly. We'll tell you w... (Andrey Karpov)
Some of our users run static analysis only occasionally. They find new errors in their code and, glad about this, willingly renew their PVS-Studio licenses. I should feel glad too, shouldn't I? But I feel sad, because used this way the tool gives you only 10-20% of its potential efficiency, while you could get at least 80-90% if you used it differently. In this post I will tell you about the most common mistake made by users of static code analysis tools.
Hartmut Kaiser evaluates his experience using the static analysis tool PVS-Studio to analyze the HPX C++ library source code. PVS-Studio found several issues, including an unused variable, an incorrect return type, and a missing copy constructor. Integrating PVS-Studio into continuous integration was seen as very useful. While the tool caught real problems, it also produced some false positives that could be suppressed. Overall the analysis was seen as valuable for finding subtle bugs.
Cyber Defense Forensic Analyst - Real World Hands-on Examples (Sandeep Kumar Seeram)
The document discusses analyzing malware using static and dynamic analysis techniques. Static analysis involves examining a malware file's code and structure without executing it, using tools like disassemblers and string extractors. Dynamic analysis executes malware in a controlled environment to observe its behaviors and any changes it makes. The document then demonstrates analyzing the "Netflix Account Generator" malware using an isolated cloud sandbox, where it is observed starting child processes and making outbound network connections, suggesting it is a remote access trojan.
Analysis of PascalABC.NET using SonarQube plugins: SonarC# and PVS-Studio (PVS-Studio)
In November 2016, we posted an article about the development and use of the PVS-Studio plugin for SonarQube. We received great feedback from customers and interested users, who requested that we test the plugin on a real project. As interest in this subject is not decreasing, we decided to test the plugin on a C# project, PascalABC.NET. It should also be borne in mind that SonarQube has its own static analyzer for C# code, SonarC#. To make the report more complete, we decided to test SonarC# as well. The objective of this work was not to compare the analyzers but to demonstrate the main peculiarities of their interaction with the SonarQube service. A plain comparison of the analyzers would not be fair, because PVS-Studio is a specialized tool for detecting bugs and potential vulnerabilities, while SonarQube is a service for assessing code quality by a large number of parameters: code duplication, compliance with coding standards, unit test coverage, potential bugs in the code, density of comments in the code, technical debt, and so on.
Lesson 7. The issues of detecting 64-bit errors (PVS-Studio)
There are various techniques of detecting errors in program code. Let us consider the most popular ones and see how efficient they are in finding 64-bit errors.
Testing parallel software is a more complicated task than testing a standard program. Programmers should be aware both of the traps they may face while testing parallel code and of the existing methodologies and toolkits.
Use of Cell Block As An Indent Space In Python (Waqas Tariq)
The document proposes using cell blocks in spreadsheets to visualize Python source code indentation. It introduces the Stereopsis algorithm to analyze source code indentation using two views - left eye and right eye. This helps identify inconsistencies in indentation. Cell blocks are used to represent indentation levels and colored cell blocks provide an additional visual cue. The approach aims to help programmers easily identify indentation errors without compiling code. Sample Python code is analyzed using the proposed approach to demonstrate how indentation errors can be detected.
Production Debugging at Code Camp Philly (Brian Lyttle)
This document provides an introduction to production debugging techniques. It discusses monitoring tools like Task Manager and Performance Monitor, debugging fundamentals like stack traces and crash dumps, protocol analysis, and remote debugging. The goal is to help developers effectively debug problems in production environments using tools that don't require a development workstation.
An important event has taken place in the PVS-Studio analyzer's life: support for C# code analysis was added in the latest version. As one of its developers, I couldn't help but try it on some project. Reading about scans of small and little-known projects is of course not very interesting, so it had to be something popular, and I picked MonoDevelop.
This lab document describes using the Metasploit framework to perform exploits against Windows systems. It consists of six sections: installing Metasploit, adding a remote user to Windows XP, gaining remote command shell access to Windows XP, using DLL injection to open a remote VNC connection, remotely installing a rootkit on Windows, and setting up the Metasploit web interface. The document provides background on exploit frameworks and payloads, and guides students through exercises to complete each section.
Source code recovery is one of the most tedious, and most interesting, tasks in reverse engineering. In this talk, the author presents a tool, developed (on and off) since last year, that aims to generate auto-compilable source code from binaries. The tool currently works, though it needs a lot more work.
This information sheet tells you about the static code analyzer PVS-Studio. PVS-Studio is a tool for bug detection in the source code of programs written in C, C++, and C#. It works in Windows and Linux environments.
As a PVS-Studio developer, I am often asked to implement various new diagnostics in our tool. Many of these requests are based on users' experience with dynamic code analyzers such as Valgrind. Unfortunately, implementing such diagnostics is usually impossible, or nearly so, for us. In this article, I'm going to explain briefly why static code analyzers cannot do what dynamic analyzers can and vice versa. Each of these analysis methodologies has its own pros and cons; one cannot replace the other, but they complement each other very well.
Detection of vulnerabilities in programs with the help of code analyzers (PVS-Studio)
Static code analysis tools can help detect vulnerabilities by analyzing source code without executing the program. This document describes 16 such tools, including BOON for buffer overflows, CQual for format string vulnerabilities, MOPS for checking rule compliance, and ITS4, RATS, PScan, and Flawfinder for buffer overflows and format strings. While useful, static tools have limitations and cannot guarantee to find all vulnerabilities. Manual review is still needed to verify results.
This document discusses attacking and exploiting antivirus software. It begins by describing how antivirus engines work and how their functionality can increase vulnerabilities. The document then details initial experiments fuzzing 14 antivirus engines, finding remote and local vulnerabilities. Specific vulnerabilities are listed for various antivirus products. Statistics on fuzzing various engines are provided. The document concludes by discussing remote exploitation of antivirus engines, noting that despite ASLR, many engines still have exploitable issues due to non-ASLR modules or RWX pages. The emulators used by antivirus engines are highlighted as a key part that can bypass some protections.
The document discusses inconsistent changes to code clones at the release level by analyzing two subject systems over multiple releases to detect clones, track clone groups between releases, and identify inconsistent changes in clone groups. It aims to observe the effects of inconsistent changes to clones at the release level since previous work has mainly analyzed inconsistent changes at the revision level.
Our team has written three articles on the code analysis of the Tizen operating system. The operating system contains a lot of code, which makes it fertile ground for articles. I think we will return to Tizen in the future, but right now other interesting projects are waiting for us. So, I will sum up some results of the work done and answer a number of questions that have arisen after the previously published articles.
Creation of a Test Bed Environment for Core Java Applications using White Box... (cscpconf)
A test bed environment allows for rigorous, transparent, and replicable testing of scientific theories. In software development, a test bed can be a specified hardware and software environment for the application under test. Though the existing open-source test bed environments in Integrated Development Environments (IDEs) are capable of supporting the development of Java applications, their test reports are generated by third-party developers and do not enhance the utility and performance of the system constructed. In our proposed system, we have created a customized test bed environment for core Java application programs that generates the test case report from a generated control flow graph. This is achieved by developing a new mini compiler with additional features.
On the Use of Static Analysis to Safeguard Recursive Dependency Resolution (Kamil Jezek)
Modern software systems are not developed from scratch – they rely heavily on the reuse of functionality provided by libraries. Selecting the right libraries remains a challenging task. What is more, libraries themselves often depend on other libraries. Managing these transitive dependencies on libraries is risky. In this paper, we describe the problems that can occur when transitive dependencies are resolved automatically, using examples from real-world programs. We then present an empirical study to assess the extent of the problem when the popular Maven tool is used, and propose an approach based on static type checking that can capture many of the problems described at build time.
Understanding Log Lines using Development Knowledge (SAIL_QU)
1. Practitioners often have difficulty understanding log lines and ask experts for help.
2. The study examined real-life log inquiries from user mailing lists and found they typically ask about a log line's meaning, cause, context, solution, or impact.
3. The proposed approach attaches development knowledge such as code, commits, issues, and comments to logs to help answer these common questions and resolve inquiries.
The document discusses reversing Microsoft patches to reveal vulnerable code. It describes taking a binary difference of files before and after a patch is applied to identify code changes and potential vulnerabilities. This process can be used to create better vulnerability signatures compared to exploit signatures. However, there are challenges to the process like obtaining the correct file versions to compare and dealing with compiler optimizations. Dynamic analysis by setting breakpoints in changed code is also described to help locate where user input is handled to potentially exploit vulnerabilities. The goal is to reveal vulnerable code details to help create vulnerability signatures and verify patches.
At some point, long ago, we somehow started to cover in our articles every subject but the PVS-Studio tool itself. We told you about the projects we checked and about the C++ language's subtle details; we told you how to create plugins in C# and how to launch PVS-Studio from the command line... But PVS-Studio is first of all meant for developers working in Visual Studio. We've done quite a lot to make our tool easier and more comfortable for them to use, yet this particular aspect usually stays off screen. I've decided to fix that and tell you about the PVS-Studio plugin from scratch. If you are a Visual C++ user, this article is for you.
Comparing static analysis in Visual Studio 2012 (Visual C++ 2012) and PVS-Studio (PVS-Studio)
After Visual Studio 2012 was released with a new static analysis unit included in all of the product's editions, a natural question arises: "Is PVS-Studio still relevant as a static analysis tool or can it be replaced by the tool integrated into VS?". A detailed answer with examples is given in this article. We have performed interface and usability comparison as well as a comparison of error diagnosis strength in real software code. The comparison was carried out on the source code of three open-source projects by id Software: Doom 3, Quake 3: Arena, Wolfenstein: Enemy Territory.
If the coding bug is banal, it doesn't mean it's not crucial (PVS-Studio)
Spreading the word about the PVS-Studio static analyzer, we usually write articles for programmers. However, programmers see some things rather one-sidedly. That is why there are project managers, who can help manage the project development process and guide it in the right direction. I decided to write a series of articles targeted at project managers. These articles will help them better understand the use of the static code analysis methodology. Today we are going to consider a false postulate: "coding errors are insignificant".
Static analysis is most efficient when being used regularly. We'll tell you w...PVS-Studio
The document discusses best practices for using static code analysis tools to maximize their effectiveness. It recommends: 1) Marking false positives to reduce future messages, 2) Using incremental analysis to check modified files, 3) Checking files modified in the last few days, and 4) Running analysis nightly on a build server. Following all recommendations provides the highest return on investment in static analysis by catching errors earlier in development.
Regular use of static code analysis in team developmentPVS-Studio
Static code analysis technologies are used in companies with mature software development processes. However, there might be different levels of using and introducing code analysis tools into a development process: from manual launch of an analyzer "from time to time" or when searching for hard-to-find errors to everyday automatic launch or launch of a tool when adding new source code into the version control system.
PVS-Studio advertisement - static analysis of C/C++ codePVS-Studio
This document advertises the PVS-Studio static analyzer. It describes how using PVS-Studio reduces the number of errors in code of C/C++/C++11 projects and costs on code testing, debugging and maintenance. A lot of examples of errors are cited found by the analyzer in various Open-Source projects. The document describes PVS-Studio at the time of version 4.38 on October 12-th, 2011, and therefore does not describe the capabilities of the tool in the next versions. To learn about new capabilities, visit the product's site http://www.viva64.com or search for an updated version of this article.
Regular use of static code analysis in team developmentPVS-Studio
Static code analysis technologies are used in companies with mature software development processes. However, there might be different levels of using and introducing code analysis tools into a development process: from manual launch of an analyzer "from time to time" or when searching for hard-to-find errors to everyday automatic launch or launch of a tool when adding new source code into the version control system.
The article discusses different levels of using static code analysis technologies in team development and shows how to "move" the process from one level to another. The article refers to the PVS-Studio code analyzer developed by the authors as an example.
Regular use of static code analysis in team developmentAndrey Karpov
Static code analysis technologies are used in companies with mature software development processes. However, there might be different levels of using and introducing code analysis tools into a development process: from manual launch of an analyzer "from time to time" or when searching for hard-to-find errors to everyday automatic launch or launch of a tool when adding new source code into the version control system.
The article discusses different levels of using static code analysis technologies in team development and shows how to "move" the process from one level to another. The article refers to the PVS-Studio code analyzer developed by the authors as an example.
The article describes the testing technologies used when developing PVS-Studio static code analyzer. The developers of the tool for programmers talk about the principles of testing their own program product which can be interesting for the developers of similar packages for processing text data or source code.
Static Analysis: From Getting Started to IntegrationAndrey Karpov
Sometimes, tired of endless code review and debugging, you start wondering if there are ways to make your life easier. After some googling or merely by accident, you stumble upon the phrase, "static analysis". Let's find out what it is and how it can be used in your project.
War of the Machines: PVS-Studio vs. TensorFlowPVS-Studio
The document summarizes the analysis of the TensorFlow machine learning library using the PVS-Studio static code analyzer. Some key findings include:
1. PVS-Studio found 64 instances of false positives related to the DCHECK debugging macro that were suppressed. Explanations of how to address false positives were provided.
2. Various PVS-Studio settings like disabling diagnostics rules and excluding automatically generated files helped filter the analysis output.
3. Genuine errors found include a null pointer dereference that could lead to undefined behavior and a redundant null check.
The document discusses Visual Studio's live static code analysis feature. It explains that this feature analyzes code in real-time as it is written, without requiring compilation, to detect errors and potential issues based on installed code analyzers. The document demonstrates how to install and use code analyzers through examples, showing how analyzers detect issues and provide suggestions to fix problems directly in the code editor through light bulb notifications. It provides a case study walking through fixing various issues detected in sample code using suggestions from an analyzer to iteratively improve the code quality.
The article describes the testing technologies used when developing PVS-Studio static code analyzer. The developers of the tool for programmers talk about the principles of testing their own program product which can be interesting for the developers of similar packages for processing text data or source code.
An ideal static analyzer, or why ideals are unachievablePVS-Studio
Being inspired by Eugene Laspersky's post about an ideal antivirus, I decided to write a similar post about an ideal static analyzer. And meanwhile think how far from being it our PVS-Studio is.
An ideal static code analyzer would have the following characteristics: 100% detection of all errors with 0% false positives, high performance across any operating system or IDE, and the ability to analyze any programming language. However, the author explains that such an ideal is unachievable. Perfect error detection and no false positives are impossible due to limitations in analyzing program logic and constantly evolving code. Wide system and language support requires significant development efforts. Quality customer support and tool maintenance require ongoing funding which supports an annual licensing model rather than one-time free use. While an ideal analyzer is unattainable, the characteristics define goals for product development.
Static analysis as part of the development process in Unreal EnginePVS-Studio
Unreal Engine continues to develop as new code is added and previously written code is changed. What is the inevitable consequence of ongoing development in a project? The emergence of new bugs in the code that a programmer wants to identify as early as possible. One of the ways to reduce the number of errors is the use of a static analyzer like PVS-Studio. Moreover, the analyzer is not only evolving, but also constantly learning to look for new error patterns, some of which we will discuss in this article. If you care about code quality, this article is for you.
PVS-Studio analyzed the Boost library and found 7 potential bugs or issues. The issues included a misprint that caused division by zero, incorrect class member initialization, memory being released incorrectly with auto_ptr, a condition that would always be true due to unsigned socket type, another misprint where a variable wasn't assigned a value, potential for infinite loop when reading from a stream, and suspicious subtraction of identical values. Finding even a small number of issues in a heavily used and reviewed library like Boost demonstrates the tool's effectiveness at static analysis.
New Year PVS-Studio 6.00 Release: Scanning RoslynPVS-Studio
The long wait is finally over. We have released a static code analyzer PVS-Studio 6.00 that supports the analysis of C# projects. It can now analyze projects written in languages C, C++, C++/CLI, C++/CX, and C#. For this release, we have prepared a report based on the analysis of open-source project Roslyn. It is thanks to Roslyn that we were able to add the C# support to PVS-Studio, and we are very grateful to Microsoft for this project.
Searching for bugs in Mono: there are hundreds of them!PVS-Studio
It's very interesting to check large projects. As a rule, we do manage to find unusual and peculiar errors, and tell people about them. Also, it's a great way to test our analyzer and improve all its different aspects. I've long been waiting to check 'Mono'; and finally, I got the opportunity. I should say that this check really proved its worth as I was able to find a lot of entertaining things. This article is about the bugs we found, and several nuances which arose during the check.
How to Improve Visual C++ 2017 Libraries Using PVS-StudioPVS-Studio
The title of this article is a hint for the Visual Studio developers that they could benefit from the use of PVS-Studio static code analyzer. The article discusses the analysis results of the libraries in the recent Visual C++ 2017 release and gives advice on how to improve them and eliminate the bugs found. Read on to find out how the developers of Visual C++ Libraries shoot themselves in the foot: it's going to be interesting and informative.
Similar to Difficulties of comparing code analyzers, or don't forget about usability (20)
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Leveraging the Graph for Clinical Trials and Standards
Difficulties of comparing code analyzers, or don't forget about usability
Authors: Evgeniy Ryzhkov, Andrey Karpov
Date: 31.03.2011
Abstract
Users' desire to compare different code analyzers is natural and understandable. However, fulfilling
this desire is not as easy as it may seem at first sight. The trouble is that it is not obvious which
particular factors should be compared.
Introduction
If we set aside such plainly absurd ideas as "let's compare the number of diagnosable errors" or
"let's compare the number of tool-generated messages", even the reasonable-sounding
"signal-to-noise ratio" turns out to be far from an ideal criterion for evaluating code analyzers.
Do you doubt that comparing these parameters is unreasonable? Here are some examples.
Which parameters are simply unreasonable to compare
Let's take a simple (at first sight) characteristic such as the number of diagnostics. It seems that the
more diagnostics, the better. But the total number of rules does not matter to an end user who works
with a particular set of operating systems and compilers. Diagnostic rules targeting systems, libraries,
and compilers he doesn't use give him nothing useful. Worse, they get in his way by bloating the
settings system and documentation, and they complicate the use and integration of the tool.
Here is an analogy: say a man comes to a store to buy a heater. He is interested in the home
appliances department, and it's good if that department has a wide range of goods. But the customer
doesn't need the other departments. It's fine if he can also buy an inflatable boat, a cell phone, or a
chair in this store, but the inflatable boats department doesn't widen the range of heaters in any way.
Take, for instance, the Klocwork tool, which supports a lot of various systems, including exotic ones.
One of them has a compiler that happily "swallows" this code:
inline int x;
The Klocwork analyzer has a special diagnostic message to detect this anomaly: "The 'inline'
keyword is applied to something other than a function or method". Well, it seems good to have such a
diagnostic. But developers using the Microsoft Visual C++ compiler, or any other adequate compiler,
won't benefit from it in any way: Visual C++ simply refuses to compile this code: "error C2433: 'x'
: 'inline' not permitted on data declarations".
Another example. Some compilers provide poor support for the bool type, so Klocwork may warn you
when a class member is given the bool type: "PORTING.STRUCT.BOOL: This checker detects
situations in which a struct/class has a bool member".
"They wrote bool in a class! How awful..." It's clear that only a few developers will benefit from this
diagnostic message.
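For illustration, here is a minimal sketch (the type and member names are invented) of the kind of perfectly ordinary C++ that the PORTING.STRUCT.BOOL check would flag, even though it is unremarkable for Visual C++ users:

```cpp
#include <cassert>

// An ordinary class with a bool member. On a few exotic compilers the size
// or semantics of 'bool' vary, which is all the PORTING.STRUCT.BOOL check
// worries about; for most developers this code is perfectly fine.
struct ConnectionState {
  bool connected;   // the diagnostic would fire on this member
  int  retries;
};

bool needs_retry(const ConnectionState &s) {
  return !s.connected && s.retries < 3;
}
```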
There are plenty of such examples. So it turns out that the number of diagnostic rules is in no way
related to the number of errors an analyzer can detect in a particular project. An analyzer implementing
100 diagnostics and aimed at Windows applications can find far more errors in a project built with
Microsoft Visual Studio than a cross-platform analyzer implementing 1000 diagnostics.
The conclusion: the number of diagnostic rules is not a relevant criterion when comparing analyzers
by usability.
You may say: "OK, then let's compare the number of diagnostics relevant to a particular system. For
instance, let's single out all the rules for finding errors in Windows applications". But this approach
doesn't work either, for two reasons:
First, some check may be implemented as one diagnostic rule in one analyzer and spread across
several rules in another. Comparing them by the number of diagnostics makes the latter analyzer look
better, although both have the same capability to detect a certain type of error.
Second, particular diagnostics may be implemented with different quality. For instance, nearly all
analyzers search for "magic numbers". But one analyzer may detect only the magic numbers that are
dangerous from the viewpoint of porting code to 64-bit systems (4, 8, 32, etc.), while another simply
flags all magic numbers (1, 2, 3, etc.). So putting a mere plus mark for each analyzer in a comparison
table won't do.
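The difference matters in practice. Here is a sketch (the function names are ours) of the two kinds of "magic numbers": a 4 hard-coded as a pointer size is genuinely dangerous when porting to 64-bit systems, while small constants like 3 are usually harmless noise:

```cpp
#include <cstddef>

// Dangerous magic number: 4 is assumed to be the size of a pointer, which
// only holds on 32-bit platforms. A 64-bit portability check should flag it.
std::size_t pointer_size_hardcoded() {
  return 4;               // wrong on typical 64-bit systems
}

// Portable alternative: let the compiler supply the size.
std::size_t pointer_size_portable() {
  return sizeof(void *);  // 8 on typical 64-bit systems
}

// Harmless magic number: flagging this 3 adds noise, not value.
bool too_many_retries(int retries) {
  return retries > 3;
}
```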
They also like to cite a tool's speed, or the number of code lines processed per second. But this is
impractical too: there is no relation between the speed of a code analyzer and the speed of the analysis
work performed by a human! First, code analysis is often launched automatically during night builds;
you just have to "be in time" for the morning. Second, people often forget about usability when
comparing analyzers. Let's study this issue in detail.
A tool's usability is very important for an adequate comparison
The point is that a tool's usability strongly influences how code analyzers are actually used in practice...
We recently checked the eMule project with two code analyzers, estimating the convenience of the
operation in each case. One of the tools was the static analyzer integrated into some Visual Studio
editions; the other was our PVS-Studio. We immediately ran into several issues when working with the
analyzer integrated into Visual Studio, and those issues had nothing to do with the analysis quality or
speed.
The first issue is that you cannot save the list of analyzer-generated messages for later examination.
For instance, checking eMule with the integrated analyzer produced two thousand messages. No one
can thoroughly investigate them all at once, so you have to examine them over several days. But since
the analysis results cannot be saved, I had to re-analyze the project each time, which was very tiring.
PVS-Studio allows you to save analysis results and continue examining them later.
The second issue concerns the handling of duplicate analyzer messages, i.e. diagnostics reported in
header files (.h files). Say the analyzer detects an issue in an .h file included into ten .cpp files. While
analyzing each of these ten .cpp files, the Visual Studio-integrated analyzer produces the same
message about the issue in the .h file ten times! Here is a real sample: the following message was
generated more than ten times while checking eMule:
c:\users\evg\documents\emuleplus\dialogmintraybtn.hpp(450):
warning C6054: String 'szwThemeColor' might not be zero-terminated:
Lines: 434, 437, 438, 443, 445, 448, 450
Because of this, the analysis results get messy and you have to review nearly identical messages
again and again. I should say that PVS-Studio has filtered out duplicate messages, instead of showing
them to the user, from the very beginning.
The third issue is the generation of messages about issues in third-party files (from folders like
C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include). The analyzer built into Visual Studio
is not ashamed to blame system header files, although there is little sense in doing so. Again, here is
an example: we repeatedly got one and the same message about system files while checking eMule:
1>c:\program files (x86)\microsoft sdks\windows\v7.0a\include\ws2tcpip.h(729):
warning C6386: Buffer overrun: accessing 'argument 1',
the writable size is '1*4' bytes,
but '4294967272' bytes might be written:
Lines: 703, 704, 705, 707, 713, 714, 715, 720,
721, 722, 724, 727, 728, 729
Nobody is ever going to edit system files, so why "blame" them? PVS-Studio has never done that.
In the same category falls the inability to tell the analyzer not to check files matching a mask, for
instance, all files "*_generated.cpp" or everything under "c:\libs". You can specify excluded files in
PVS-Studio.
The fourth issue relates to the process of handling the list of analyzer-generated messages. Of course,
any code analyzer lets you disable diagnostic messages by their codes. The question is how
convenient that is; to be exact: does hiding unwanted messages by code require relaunching the
analysis? With the Visual-Studio-integrated analyzer, you must enter the codes of the messages to be
disabled in the project's settings and relaunch the analysis. You can hardly list all the "unnecessary"
diagnostics in advance, so you will have to relaunch the analysis several times. In PVS-Studio, you can
easily hide and reveal messages by code without relaunching the analysis, which is much more
convenient.
The fifth issue is filtering messages not only by code but by text as well. For instance, it might be
useful to hide all messages containing "printf". The analyzer integrated into Visual Studio doesn't have
this feature; PVS-Studio does.
Finally, the sixth issue is the convenience of marking messages as false alarms. The #pragma warning
disable mechanism employed in Visual Studio lets you hide a message only after relaunching the
analysis. The mechanism in PVS-Studio lets you mark messages as "False Alarm" and hide them
without relaunching the analysis.
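To make the difference concrete, here is a sketch of both suppression styles (the warning and diagnostic numbers are illustrative, and the function is ours). The pragma lives in the source and takes effect only on the next analysis run; PVS-Studio's "False Alarm" mark is a trailing comment that the message viewer honors immediately:

```cpp
// Visual Studio's analyzer: suppression via pragma; the analysis must be
// re-run before the message disappears from the list.
#pragma warning(push)
#pragma warning(disable: 6054)   // "String might not be zero-terminated"
void handle_theme_color(char *szwThemeColor);
#pragma warning(pop)

// PVS-Studio: marking a message as "False Alarm" appends a //-V comment to
// the flagged line; the viewer hides it without relaunching the analysis.
bool is_flag(const char *s) {
  return s && s[0] == '-';       //-V601 (illustrative number)
}
```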
None of the six issues above relates to the quality of the code analysis itself, yet they are very
important: usability is the integral characteristic that determines whether you will ever get as far as
evaluating the analysis quality at all.
Let's see what we've got. The static analyzer integrated into Visual Studio checks the eMule project
several times faster than PVS-Studio. But it took us 3 days to complete the work with Visual Studio's
analyzer (actually less, but we had to switch to other tasks to get a rest). PVS-Studio took us only
4 hours to complete the work.
Note: as far as the quantity of errors found is concerned, both analyzers showed almost the same
results and found the same errors.
Summary
Comparison of two static analyzers is a very difficult and complex task. And there is no answer to the
question what tool is the best IN GENERAL. You can only speak of what tool is better for a particular
project and user.