The document discusses different tools for analyzing iOS code for bugs and performance issues. It describes the static analyzer as the first line of defense, which finds logic flaws and coding issues by analyzing code much as a compiler does. It then discusses using the Instruments tool on the simulator as the second line of defense for finding memory leaks and performance bottlenecks. Specific instruments such as Allocations, Leaks, and Time Profiler are described for analyzing different types of issues. Command-line tools for configuring which static analyzer version is used are also covered.
3. Static Analyzer
First line of defense
Built-in based on open source Clang Static Analyzer
Finds bugs in C and Objective-C programs
Works like a compiler, looking for logic flaws and instances where best coding
practices are not followed
Good for catching unused variables and other small memory-management issues
4. Static Analyzer - Options
1. By default, Xcode uses the version of clang that came bundled with it to analyze
code
2. Alternatively, open-source analyzer builds can be used
5. Open Source Builds - Advantages
Newer than the analyzer bundled with Xcode
Contain bug fixes
New checks
Better analysis
6. Static Analyzer – Command Line Utility
• set-xcode-analyzer
It allows the user to change which copy of clang Xcode uses for analysis
Terminal :
$ set-xcode-analyzer -h
Usage: set-xcode-analyzer [options]
Options:
-h, --help show this help message and exit
--use-checker-build=PATH
Use the Clang located at the provided absolute path,
e.g. /Users/foo/checker-1
--use-xcode-clang Use the Clang bundled with Xcode
7. Modes of set-xcode-analyzer
• --use-xcode-clang
Switches Xcode back to using the clang that came bundled with it for static analysis
• --use-checker-build
Switches Xcode to using the clang provided by the specified analyzer build
Things to keep in mind :
1. Quit Xcode prior to running set-xcode-analyzer
2. Run set-xcode-analyzer with sudo in order to have write privileges to modify Xcode
configuration files
8. Examples
1. Example 1:
Telling Xcode to use a specific build of clang
Terminal :
$ sudo set-xcode-analyzer --use-checker-build=~/mycrazyclangbuild/bin/clang
2. Example 2:
Telling Xcode to use its default clang analyzer
Terminal :
$ sudo set-xcode-analyzer --use-xcode-clang
10. Types of Memory Leaks
• True memory leak : an object has not yet been de-allocated but is no longer
referenced by anything, so its memory can never be re-used.
• Unbounded memory growth : memory continues to be allocated and is never
given a chance to be de-allocated.
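The classic cause of a true leak under ARC is a retain cycle. The sketch below (hypothetical `Parent`/`Child` classes, not from the slides) shows two objects holding strong references to each other: once the last external reference goes away, neither `deinit` ever runs, even though nothing can reach the objects any more.

```swift
import Foundation

// Hypothetical classes illustrating a true memory leak via a retain cycle.
final class Parent {
    var child: Child?
    let onDeinit: () -> Void
    init(onDeinit: @escaping () -> Void) { self.onDeinit = onDeinit }
    deinit { onDeinit() }
}

final class Child {
    var parent: Parent?   // strong back-reference -> retain cycle
}

func demonstrateLeak() -> Bool {
    var deallocated = false
    do {
        let parent = Parent(onDeinit: { deallocated = true })
        let child = Child()
        parent.child = child
        child.parent = parent   // cycle created here
    }                           // both local references go out of scope
    return deallocated          // stays false: deinit never ran, so the pair leaked
}
```

Declaring the back-reference as `weak var parent: Parent?` would break the cycle and let both objects be deallocated; the Leaks instrument exists to find exactly the cases where that was forgotten.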
11. Different Instruments
• Allocations
Used to take snapshots of the heap as the app performs its tasks.
For unbounded-memory-growth situations, the Allocations instrument is the one to use.
Test case : do something in the app and then undo that something, returning the app to its
prior state. If the memory allocated on the heap is still the same, no worries. It's a simple,
repeatable test scenario of performing a task and returning the app to its state prior to performing it.
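The "do, then undo" pattern above can be sketched in code. This is only an illustration: instead of a real Allocations heap snapshot, a hypothetical `liveCount` counter stands in for the number of live objects before and after the round trip.

```swift
import Foundation

// Hypothetical stand-ins for app objects; liveCount plays the role of
// an Allocations heap snapshot.
final class Widget {
    static var liveCount = 0
    init() { Widget.liveCount += 1 }
    deinit { Widget.liveCount -= 1 }
}

final class Document {
    private var widgets: [Widget] = []
    func performTask() { widgets.append(Widget()) }   // "do something"
    func undoTask() { widgets.removeLast() }          // "undo that something"
}

func runSnapshotTest() -> Bool {
    let doc = Document()
    let before = Widget.liveCount   // snapshot before the task
    doc.performTask()
    doc.undoTask()                  // return the app to its prior state
    let after = Widget.liveCount    // snapshot after
    return before == after          // equal counts -> no unbounded growth
}
```

If repeated runs of the task leave the count (or, in Instruments, the heap size) higher each time, that is the signature of unbounded memory growth.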
12. • Leaks
For true-memory-leak situations, the Leaks instrument is used.
Test case : the most common situation is buried or overly complex logic that is supposed to
release memory but, under certain circumstances, never gets executed. These memory
leaks can lead to the app crashing or being shut down. If an app is holding on to too much memory
when the user decides to suspend the app, the watchdog may have no choice but to quit the app in
order to free memory. Keeping the application lean minimizes the chances of this happening.
13. • Time Profiler
Premature optimization, i.e. spending time optimizing bits of code that don't matter in the end,
is the issue the Time Profiler tackles.
Apple recommends that developers perform time measurements on the slowest supported device.
It allows developers to prioritize which bits of logic need to be refactored prior to release. Some things may
not be fixable, but other factors can be reviewed to see if there is a better way to address the issue
at hand, possibly by moving logic off the main thread using blocks and Grand Central
Dispatch
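Moving work off the main thread with Grand Central Dispatch, as the slide suggests, can be sketched as below. The function and queue choice are assumptions for illustration; the semaphore at the bottom exists only because this is a command-line sketch, while a real app would update the UI from the completion handler instead.

```swift
import Dispatch

// Stand-in for the hot code path Time Profiler would flag.
func expensiveWork() -> Int {
    return (1...100_000).reduce(0, +)
}

// Run the expensive work on a background queue so the main thread
// (and the UI) stays responsive.
func computeInBackground(completion: @escaping (Int) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let result = expensiveWork()
        completion(result)  // in a real app, hop back to DispatchQueue.main here
    }
}

// Usage: block the caller with a semaphore purely for this sketch.
let semaphore = DispatchSemaphore(value: 0)
var answer = 0
computeInBackground { result in
    answer = result
    semaphore.signal()
}
semaphore.wait()
```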
14. Pick What you Want
Go to Product -> Profile and you are presented with a plateful of options
15. Terminologies
Separate by Thread: Each thread should be considered separately. This enables you to understand
which threads are responsible for the greatest amount of CPU use.
Invert Call Tree: With this option, the stack trace is considered from top to bottom. This means that
you will see the methods in the table that would have been in frame 0 when the sample was taken.
This is usually what you want, as you want to see the deepest methods where the CPU is spending
its time.
Hide Missing Symbols: If the dSYM file cannot be found for your app or a system framework, then
instead of seeing method names (symbols) in the table, you’ll just see hex values. These correspond
to the address of the instruction within the binary code. If this option is selected, then these are
hidden, and only fully resolved symbols are displayed. This helps to declutter the data presented.
Hide System Libraries: When this option is selected, only symbols from your own app are displayed.
It’s often useful to select this option, since usually you only care about where the CPU is spending
time in your own code – you can’t do much about how much CPU the system libraries are using!
Show Obj-C Only: If this is selected, then only Objective-C methods are displayed, rather than any C
or C++ functions. If your program has none, nothing changes; but if you were looking at an OpenGL
app, it might have some C++, for example.
Flatten Recursion: This option treats recursive functions (ones which call themselves) as one entry in
each stack trace, rather than multiple.
Top Functions: Enabling this makes Instruments consider the total time spent in a function as the
sum of the time directly within that function, as well as the time spent in functions called by that
function. So if function A calls B, then A’s time is reported as the time spent in A PLUS the time spent
in B. This can be really useful, as it lets you pick the largest time figure each time you descend into
the call stack, zeroing in on your most time-consuming methods.
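The Top Functions accounting described above can be made concrete with a tiny sketch (hypothetical `CallNode` type, not part of Instruments): a function's total time is its own self time plus the total time of every function it calls.

```swift
// Top Functions accounting: total time = self time + time in callees.
struct CallNode {
    let name: String
    let selfTime: Double          // seconds spent directly in this function
    let children: [CallNode]      // functions this one calls

    var totalTime: Double {
        selfTime + children.reduce(0) { $0 + $1.totalTime }
    }
}

// If A spends 1s itself and calls B (2s self), which calls C (3s self),
// Top Functions reports A as 6s, B as 5s, and C as 3s.
let c = CallNode(name: "C", selfTime: 3, children: [])
let b = CallNode(name: "B", selfTime: 2, children: [c])
let a = CallNode(name: "A", selfTime: 1, children: [b])
```

Descending from A (6s) into B (5s) and then C (3s), always following the largest total, is exactly the "zeroing in" strategy the slide describes.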